Jasper’s Music Player

[Image: ChildrenofMen-player03]

After Jasper tells a white lie to Theo, Miriam, and Kee to get them to escape the advancing gang of Fishes, he returns indoors. To set a mood, he picks up a remote control and presses a button on it while pointing it at a display.

[Image: ChildrenofMen-player02]

He watches a small transparent square that rests atop some things in a nook. (It’s that decimeter-square, purplish thing on the left of the image, just under the lampshade.) The display initially shows an album queue, with thumbnails of the album covers and two bright words, unreadably small. In response to his button press, the thumbnail for Franco Battiato’s album FLEURs slides from the right to the left. A full song list for the album appears beneath the thumbnail. Then track two, the cover of Ruby Tuesday, begins to play. A small thumbnail appears to the right of the album cover, featuring some white text on a dark background and a cycling, animated border. Jasper puts the remote control down, picks up the Quietus box, and walks over to Janice. *sniff*

This small bit of speculative consumer electronics gets around 17 seconds of screen time, but we see enough to consider the design.
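If I were to model what we see of the queue behavior as code, it might look something like the minimal Python sketch below. Every name and structure here is my own invention for illustration; the film shows us only the sliding thumbnails and the track list.

    # Hypothetical model of Jasper's album queue, as the behavior reads on screen.
    class AlbumQueue:
        def __init__(self, albums):
            self.albums = albums    # list of dicts: {"title": ..., "tracks": [...]}
            self.position = -1      # nothing highlighted until the first press

        def advance(self):
            """One remote press: slide the next album in and reveal its track list."""
            self.position = (self.position + 1) % len(self.albums)
            album = self.albums[self.position]
            print(f"Now showing: {album['title']}")
            for n, track in enumerate(album["tracks"], start=1):
                print(f"  {n}. {track}")
            return album

    # Only "Ruby Tuesday" (track two) is legible in the film; the rest is placeholder.
    queue = AlbumQueue([{"title": "FLEURs", "tracks": ["(track one)", "Ruby Tuesday"]}])
    queue.advance()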

Scenery display

[Image: BttF_096]

Jennifer is amazed to find a window-sized video display in the future McFly house. When Lorraine arrives at the home, she picks up a remote to change the display. We don’t see it up close, but it looks like she presses a single button to change the scene from a sculpted garden to one of a beach sunset, a cityscape, and a windswept mountaintop. It’s a simple interface, though perhaps more work than necessary.

We don’t know how many scenes are available, but having to click one button to cycle through all of them could get very frustrating if there are more than, say, three. Adding a selection ring around the button would let the display shift from showing a single scene to showing a menu from which the next scene could be selected.
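To put that tradeoff in concrete terms, here is a minimal Python sketch (the scene names are from the scene; everything else is invented) contrasting the two schemes:

    # One-button cycling vs. menu selection for the scenery display.
    SCENES = ["sculpted garden", "beach sunset", "cityscape", "windswept mountaintop"]

    def cycle(current: int) -> int:
        """One button: reaching a given scene can take len(SCENES) - 1 presses."""
        return (current + 1) % len(SCENES)

    def select_from_menu(choice: int) -> int:
        """Selection ring: all scenes visible at once; any scene is one action away."""
        if not 0 <= choice < len(SCENES):
            raise ValueError("no such scene")
        return choice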

J.D.E.M. LEVEL 5

The first computer interface we see in the film occurs at 3:55. It’s an interface for housing and monitoring the tesseract, a cube that is described in the film as “an energy source” that S.H.I.E.L.D. plans to use to “harness energy from space.” We join the cube after it has unexpectedly and erratically begun to throw off low levels of gamma radiation.

The harnessing interface consists of a housing, a dais at the end of a runway, and a monitoring screen.

[Image: Avengers-cubemonitoring-07]

Fury walks past the dais they erected just because.

The housing & dais

The harness consists of a large circular housing that holds the cube and exposes one face of it towards a long runway that ends in a dais. Diegetically this is meant to be read more as engineering than interface, but it does raise questions. For instance, if they didn’t already know it was going to teleport someone here, why was there a dais there at all, at that exact distance, with stairs leading up to it? How’s that harnessing energy? Wouldn’t you expect a battery at the far end? If they did expect a person as it seems they did, then the whole destroying swaths of New York City thing might have been avoided if the runway had ended instead in the Hulk-holding cage that we see later in the film. So…you know…a considerable flaw in their unknown-passenger teleportation landing strip design. Anyhoo, the housing is also notable for keeping part of the cube visible to users near it, and holding it at a particular orientation, which plays into the other component of the harness—the monitor.

[Image: Avengers-cubemonitoring-03]

Precrime forearm-comm

[Image: MinRep-068]

Though most everyone in the audience left Minority Report with the precrime scrubber interface burned into their minds (see Chapter 5 of the book for more on that interface), the film was loaded with lots of other interfaces to consider, not the least of which were the wearable devices.

Precrime forearm devices

These devices are worn when Anderton is in his field uniform while on duty, and are built into the material across the left forearm. On the anterior side just at the wrist is a microphone for communications with dispatch and other officers. By simply raising that side of his forearm near his mouth, Anderton opens the channel for communication. (See the image above.)

[Image: MinRep-080]

There is also a basic circular display in the middle of the posterior left forearm that shows a countdown for the current mission: the time remaining before the predicted crime is due to take place. The text is large white characters against a dark background. Although the translucency makes the digits compete with the noisy background of the watch (what is that in there, a Joule heating coil?), the jump-cut transitions of the seconds ticking by command the user’s visual attention.

On the anterior forearm there are two visual output devices: a rectangular perpetrator-information (and general-purpose?) display, and an amber-colored circular one we never see up close. In the beginning of the film Anderton has a man pinned to the ground and scans his eyes with a handheld Eyedentiscan device. Through retinal biometrics, the pre-offender’s identity is confirmed and sent to the rectangular display, where Anderton can confirm that the man is a citizen named Howard Marks.

Wearable analysis

Checking these devices against the criteria established in the combadge writeup, they fare well. This is partly because they build on a century of product evolution for the wristwatch.

They are sartorial, bearing displays that lie flat against the skin, connected to soft parts that hold them in place.

They are social, worn in a location where other people are used to seeing similar technology.

They are easy to access and use, being along the forearm. Placing different kinds of information at different spots on the body means the officer can count on body memory to access particular data, e.g., perp info is on the anterior middle forearm. That saves him the cognitive load of managing modes on the device.

The display size for this rectangle is smallish considering the amount of data being displayed, but being on the forearm means that Anderton can adjust its apparent size by bringing it closer to or farther from his face. (Though we see no evidence of this in the film, it would be cool if the amount of information changed based on distance to the observer’s face. Writing that distanceFromFace() algorithm might be tricky, though.)
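For what it’s worth, the hard part might be handled the way real computer vision systems often approximate distance: from the apparent size of the viewer’s face in a camera frame. A sketch of that approach in Python, with every constant, threshold, and name invented:

    # Invented sketch: scale information density by estimated distance to the face.
    # Assumes some face detector reports the face's apparent width in pixels.

    AVERAGE_FACE_WIDTH_CM = 14.0  # rough anthropometric constant

    def distance_from_face(face_width_px: float, focal_length_px: float) -> float:
        """Pinhole-camera estimate: distance = real width * focal length / pixel width."""
        return AVERAGE_FACE_WIDTH_CM * focal_length_px / face_width_px

    def detail_level(distance_cm: float) -> str:
        """Closer arm = larger in the visual field = room for more detail."""
        if distance_cm < 25:
            return "full record: name, photo, charges, countdown"
        elif distance_cm < 50:
            return "summary: name and photo"
        return "glanceable: countdown only"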

There might be some question about accidental activation, since Anderton could be shooting the breeze with his buddies while scratching his nose and mistakenly send a dirty joke to a dispatcher, but this seems like an unlikely and uncommon enough occurrence to simply not worry about it.
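And if it did become a problem, a dwell-time guard would be a cheap fix: only open the channel once the wrist has been held near the mouth, mic side up, for a beat. A Python sketch, with invented readings and thresholds:

    # Invented sketch: debounce the raise-to-talk gesture with a short dwell time.
    DWELL_SECONDS = 0.5

    def should_open_channel(samples):
        """samples: recent (seconds_ago, near_mouth, mic_up) sensor readings."""
        recent = [s for s in samples if s[0] <= DWELL_SECONDS]
        return bool(recent) and all(near and up for _, near, up in recent)

    print(should_open_channel([(0.1, True, True), (0.4, True, True)]))   # True
    print(should_open_channel([(0.1, True, False), (0.4, True, True)]))  # False: nose-scratch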

Using voice as the input is cinemagenic, but especially in his line of work a subvocalization input would keep him quieter—and therefore safer—in the field. Still, voice inputs are fast and intuitive, making for fairly apposite I/O. Ideally he might have some haptic augmentation of the countdown, and audio augmentation of the perp info, so Anderton wouldn’t have to pull his arm and attention away from the perpetrator. But as long as the information is glanceable and Anderton is merely confirming data (rather than taking in new information), recognition is a fast enough cognitive process that this isn’t too much of a problem.

All in all, not bad for a “throwaway” wearable technology.

Dispatch

[Image: LOGANS_RUN_map_520]

At dispatch for the central computer, Sandmen monitor a large screen that displays a wireframe plan of the city, including architectural detail and even plants, all color coded using saturated reds, greens, and blues. When a Sandman has accepted the case of a runner, he appears as a yellow dot on the screen. The runner appears as a red dot. Weapons fire can even be seen as a bright flash of blue. The red dots of terminated runners fade from view.

Using the small screens and unlabeled arrays of red- and yellow-lit buttons situated on an angled panel in front of them, the seated Sandmen can send out a call to catch runners, listen to any spoken communications, and respond with text and images.

[Image: LogansRun094]

*UXsigh* What are we going to do with this thing? With an artificial intelligence literally steps behind them, why rely on a slow bunch of humans at all for answering questions and transmitting data? It might be better to just let the Sandmen do what they’re good at, and let the AI handle what it’s good at.

But OK, if it’s really that limited of an Übercomputer and can only focus on whatever is occupying it at the moment, at least make the controls usable by people. Let’s do the hard work of reducing the total number of controls, so they can be clustered all within easy reach rather than spread out so you have to move around just to operate them all. Or use your feet or whatever. Differentiate the controls so they are easy to tell apart by sight and touch rather than this undifferentiated mess. Let’s take out a paint pen and actually label the buttons. Do…do something.

[Image: LogansRun095]

This display could use some rethinking as well. It’s nice that it’s overhead, so that dispatch can be thinking about field strategy rather than ground tactics. But if that’s the case, it could use some design help and some strategic information. How about downplaying the saturation on the things that don’t matter that much, like walls and plants? Then the Sandmen can focus more on the interplay of the runner and his assailants. Next you could augment the display with information about the runner: perhaps a best-guess prediction of where they’re likely to run, the health of individuals, or the amount of ammunition they have.
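To make the desaturation suggestion concrete, here is how that layering might read as a rendering priority table. The layer names and saturation values are mine, not anything from the film:

    # Invented sketch: draw order and saturation for a better dispatch display.
    LAYERS = [
        # (layer, saturation, rationale), drawn back to front
        ("walls and plants",      0.2, "context only; should recede"),
        ("runner path history",   0.6, "useful, but secondary"),
        ("predicted runner path", 0.8, "the strategic guess; draw attention"),
        ("runner and Sandmen",    1.0, "the actors; maximum salience"),
    ]

    for layer, saturation, rationale in LAYERS:
        print(f"{layer}: saturation {saturation:.0%} ({rationale})")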

Which makes me realize that more than anything, this screen could use the hand of a real-time strategy game user interface designer, because that’s what they’re doing. The Sandmen are playing a deadly, deadly video game right here in this room, and they’re using a crappy interface to try and win it.

Mission Briefing

Once the Prometheus crew has been fully revived from their hypersleep, they gather in a large gymnasium to learn the details of their mission from a prerecorded volumetric projection. To initiate the display, David taps the surface of a small tablet-sized handheld device six times, and looks up. A prerecorded VP of Peter Weyland appears and introduces the scientists Shaw and Holloway.

This display does not appear to be interactive. Weyland does mention and gesture toward Shaw and Holloway in the audience, but they could have easily been in assigned seats.

Cue Rubik’s Space Cube

After his introduction, Holloway places an object on the floor that looks like a silver Rubik’s Cube with a depressed black button in the center-top square.

[Image: Prometheus-055]

He presses a middle-edge button on the top, and the cube glows and sings a note. Then a glowing yellow “person” icon appears at the place he touched, confirming his identity and signaling that the device is ready to go.

He then presses an adjacent corner button. Another glowing yellow icon appears underneath his thumb, this one a triangle-within-a-triangle, and a small projection grows from the side. Finally, by pressing the black button, all of the squares on top open on hinged lids, and the portable projection begins. A row of seven (or eight?) “blue-box” style volumetric projections appears, each showing its 3D contents with a continuous, slight rotation.

Gestural control of the display

After describing the contents of each of the boxes, he taps the air toward either end of the row (a sparkle-sound confirms each gesture) and brings his middle fingers together into a prayer position. In response, the boxes slide to the center as a stack.

He then twists his hands in opposite directions, keeping the fingerpads of his middle fingers in contact. As he does this, the stack merges.

[Image: Prometheus-070]

Then a forefinger tap summons an overlay that highlights a star pattern on the first plate. A middle finger swipe to the left moves the plate and its overlay off to the left. The next plate automatically highlights its star pattern, and he swipes it away. Next, with no apparent interaction, the plate dissolves in a top-down disintegration-wind effect, leaving only the VP spheres that illustrate the star pattern. These grow larger.

Holloway taps the topmost of these spheres, and the VP zooms through interstellar space to reveal an indistinct celestial sphere. He then taps the air again (nothing in particular is beneath his finger) and the display zooms to a star. Another tap zooms to a VP of LV-223.

[Image: Prometheus_VP-0030]

[Image: Prometheus_VP-0031]

After a beat of about 9 seconds, the presentation ends, and the VP of LV-223 collapses back into its floor cube.

Evaluating the gestures

In Chapter 5 of Make It So we list the seven pidgin gestures that Hollywood has evolved. The gestures seen in the Mission Briefing confirm two of these: Push to Move and Point to Select, but otherwise they seem idiosyncratic, not matching other gestures seen in the survey.

That said, the gestures seem sensible. On tapping the “bookends” of the blue boxes, Holloway’s finger pads come to represent the extents of the selection, so bringing them together is a reasonable gesture to indicate stacking. The twist gesture seems to lock the boxes in place, to break the connection between them and his fingertips. This twist gesture turns his hand like a key in a lock, so it has a physical analogue.

It’s confusing that a tap would perform four different actions (highlight star patterns in the blue boxes, zoom to the celestial sphere, zoom to a star, zoom to LV-223), but there is no indication that this is a platform for manipulating VPs so much as it is presentation software. With this in mind, he could arbitrarily assign any gesture to simply “advance the slide.”
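In code terms, presentation software is free to collapse many gestures onto one action, which would explain the overloaded tap. A Python sketch with invented names:

    # Invented sketch: in presentation software, many gestures share one action.
    def advance_slide(deck):
        deck["index"] = min(deck["index"] + 1, len(deck["slides"]) - 1)
        return deck["slides"][deck["index"]]

    GESTURE_BINDINGS = {
        "tap": advance_slide,         # highlights, zooms: all just "next"
        "swipe_left": advance_slide,  # dismissing a plate also advances
    }

    deck = {"slides": ["star patterns", "celestial sphere", "star", "LV-223"], "index": 0}
    print(GESTURE_BINDINGS["tap"](deck))  # -> celestial sphere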

Floating-pixel displays

In other posts we compared the human and alien VPs of Prometheus. They were visually distinct from each other, with the alien “glowing pollen” displays being unique to this movie.

There is a style of human display in Prometheus that looks similar to the pollen. Since the users of these displays don’t perceive these points in 3D, it’s more precise to call it a floating-pixel style. These floating-pixel displays appear in three places.

  • David’s Neurovisor, for peering into the dreams of the hypersleeping Shaw. (Note this may be 3D for him.)
  • The landing-sequence topography displays.
  • The science lab scanner, used on the alien head.

[Image: Prometheus-007]

[Image: Prometheus-096]

[Image: Prometheus-165]

There is no diegetic reason offered in the movie for the appearance of an alien 3D display technology in human 2D systems. When I started to try and explain it, it quickly drifted away from interaction design and into fan theory, so I have left it as an exercise for the reader. But there remains a question about the utility of this style.

Poor cues for understanding 3D

Floating, glowing points are certainly novel to our survey as a way to describe 3D shapes for users. And in the case of the alien pollen, it makes some sense. Seeing these in the world, our binocular vision would help us understand the relationships of each point as well as the gestalt, like walking around a Christmas tree at night.

But in 2D, simple points are not ideal for understanding 3D surfaces, especially when the pixels are all the same apparent size. We normally rely on small differences in scale to judge an object’s relative distance from us. Though the shape can be roughly inferred through motion, the style still creates a great deal of visual noise, and it suffers further when the points are spaced too far apart, since that denies us a gestalt sense of the surface.
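The missing cue would be cheap to restore in principle: under ordinary perspective projection, a point’s drawn size falls off with its distance. A Python sketch of the math, with invented constants:

    # Invented sketch: restore the size-as-depth cue that uniform floating
    # pixels throw away. Standard pinhole perspective projection.

    def project(point, focal_length=1.0, base_radius=4.0):
        """Map a 3D point (x, y, z with z > 0) to a screen position and dot size."""
        x, y, z = point
        screen_x = focal_length * x / z
        screen_y = focal_length * y / z
        radius = base_radius / z  # nearer points draw larger: the depth cue
        return screen_x, screen_y, radius

    print(project((1.0, 1.0, 2.0)))  # far point: small dot
    print(project((1.0, 1.0, 0.5)))  # near point: large dot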

[Image: Prometheus-098]

[Image: floating-points]

I couldn’t find any scientific studies of the readability of this style, so this is my personal take on it. But we can also look to the real world, namely to the history of maps, where cartographers have wrestled with similar problems in showing topography. Centuries of their trial and error have resulted in four primary techniques for describing 3D shapes on a 2D surface: hachures, contour lines, hypsometric tints, and shaded relief.

These styles utilize lines, shades, and colors to describe topography, and notably not points. Even modern 3D modeling software uses tessellated wireframes instead of floating points as a lightweight rendering technique. To my knowledge, only geographic information systems display anything similar, and that’s only when the user wants to see actual data points.

These anecdotal bits of evidence combine with my observations of these interfaces in Prometheus to convince me that while it’s stylistically unique (and therefore useful to the filmmakers), it’s seriously suboptimal for real-world adoption.

Alien VPs

In the volumetric projection chapter of Make It So, we note that sci-fi makers take pains to distinguish the virtual from the real most often with a set of visual treatments derived from the “Pepper’s Ghost” parlor trick, augmented with additional technology cues: translucency, a blue tint, glowing whites, supersaturated colors for wireframed objects, clear pixels and/or flicker, with optional projection rays.

Prometheus has four types of VPs that adhere to this style in varying degrees. Individual displays (with their interactions) are discussed in other posts. This collection of posts compares their styles. This particular post describes the alien VPs.

[Image: Prometheus-223]

The two alien VPs are quite different from the human VPs in appearance and behavior. The first thing to note is that they adhere to the Pepper’s Ghost style more readily, with glowing blue-tinted whites and transparency. Beyond that they differ in precision and implied technology.

Precision VPs

The first style of alien VP appears in the bridge of the alien vessel, where projection technology can be built into the architecture. The resolution is quite precise. When the grapefruit-sized Earth gets close to the camera in one scene, it appears to have infinite resolution, even though this is some teeny tiny percentage of the whole display.

[Image: Prometheus-228]

Glowing Pollen

The other alien VP tech is made up of small, blue-white voxels that float, move in space, obey some laws of physics, and provide a crude level of resolution. These appear in the caves of the alien complex where display tech is not present in the walls, and again as “security footage” in the bridge of the alien ship. Because the voxels obey some laws of physics, it’s easier to think of them as glowing bits of pollen.

[Images: Prometheus-211, Prometheus-140]

Pollen behavior

These voxels appear to not be projections of light in space, but actual motes that float through the air. When David activates the “security footage” in the alien complex, a wave of this pollen appears and flows past him. It does not pass through him, but collides with him, each collided mote taking a moment to move around him and regain its roughly-correct position in the display. (How it avoids getting in his mouth is another question entirely.) The motes even produce a gust of wind that disturbs David’s bleached coif.
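That regain-its-position behavior reads like a classic damped spring: each mote accelerates back toward its assigned spot in the display while bleeding off velocity. A Python sketch, with invented constants:

    # Invented sketch: a displaced mote springs back to its position in the display.
    def step_mote(position, velocity, target, dt=0.016, stiffness=8.0, damping=4.0):
        """Damped spring via semi-implicit Euler integration."""
        accel = tuple(stiffness * (t - p) - damping * v
                      for p, v, t in zip(position, velocity, target))
        velocity = tuple(v + a * dt for v, a in zip(velocity, accel))
        position = tuple(p + v * dt for p, v in zip(position, velocity))
        return position, velocity

    pos, vel = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)  # knocked off-station by a collision
    for _ in range(200):
        pos, vel = step_mote(pos, vel, target=(1.0, 0.5, 0.0))
    print(pos)  # settles near (1.0, 0.5, 0.0)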

Pollen inaccuracy

The individual lines of pollen follow smooth arcs through the air, but lines appear to be slightly off from one another.

[Image: Prometheus-215]

This style is beautiful and unique, and conveys a 3D display technology that can move to places even where there’s not a projector in line of sight. The sci-fi makers of this speculative technology use this inaccuracy to distinguish it from other displays. But if a precise understanding of the shapes being described is useful to its viewers, of course it would be better if the voxels were more precisely positioned in space. That’s a minor critique. The main critique of this display is when it gets fed back into the human displays as an arbitrary style, as I’ll discuss in the next post about the human-tech, floating-pixel displays.