Cyberspace: Bulletin Board

Johnny finds he needs a favor from a friend in cyberspace. We see Johnny type something on his virtual keyboard, then select from a pull-down menu.

JM-35-copyshop-Z-animated

A quick break in the action: in this shot we are looking at the real world, not the virtual one, and I want to mention how clear and well defined all of actor Keanu Reeves’s physical actions are. I very much doubt that the headset he is wearing actually worked, so he is doing all of this without being able to see anything.

Will regular users of virtual reality systems be this precise with their gestures? Datagloves have always been expensive and rare, making studies difficult. But several systems now offer submillimeter gestural tracking: version 2 of the Microsoft Kinect, Google’s Soli, and the Leap Motion are a few, all much cheaper and less fragile than a dataglove. Using any of these for regular desktop application tasks rather than games would be an interesting experiment.
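To make that experiment concrete, here is a minimal sketch of what the desktop side might look like: raw tracker features (finger count, palm speed) classified into coarse poses, then mapped to application commands. All pose names, thresholds, and commands here are invented for illustration; no particular tracker’s API is assumed.

```python
# Hypothetical sketch: mapping coarse hand poses from a gestural
# tracker (Kinect, Soli, Leap Motion, etc.) to desktop commands.
# Thresholds and labels are invented, not from any real SDK.

def classify_pose(extended_fingers: int, palm_speed_mm_s: float) -> str:
    """Map raw tracker features to a coarse pose label."""
    if palm_speed_mm_s > 500:     # fast lateral motion reads as a swipe
        return "swipe"
    if extended_fingers == 0:
        return "fist"
    if extended_fingers == 5:
        return "open_hand"
    return "point"

# One place to rebind gestures to application commands.
POSE_COMMANDS = {
    "swipe": "next_page",
    "fist": "grab_window",
    "open_hand": "stop",
    "point": "select",
}

def gesture_to_command(extended_fingers: int, palm_speed_mm_s: float) -> str:
    return POSE_COMMANDS[classify_pose(extended_fingers, palm_speed_mm_s)]
```

The interesting design question such an experiment would answer is how small these gesture vocabularies must stay before users can perform them as crisply as Reeves mimes them.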

Back in the film, Johnny flies through cyberspace until he finds the bulletin board of his friend. It is an unfriendly glowing shape that Johnny tries to expand or unfold without success.

JM-36-bboard-A-animated

After some more virtual typing, the bulletin board reveals itself as a cube that spins and expands. It doesn’t fill the entire screen, but does reveal the face of Strike, the owner of the bulletin board. His face is stylized as if by a real time image processing filter of the type built into most static image editors today. Strike tells Johnny to go away.

JM-36-bboard-B-animated

Johnny doesn’t give up and the conversation continues. The cube now expands to fill the screen, with Johnny looking into the cube and Strike’s face on the back wall.

JM-36-bboard-C

Johnny raises his hands and makes a threatening gesture, saying that he could crash Strike’s entire system. In cyberspace, his fingertips now have blades.

JM-36-bboard-D-animated

The face retreats in cyberspace, becoming smaller and further away. I’d like to think that Strike leaned back, and that has been mapped into a cyberspace equivalent move. The real world gesture carries its meaning to cyberspace.

A short while ago the Yakuza leader Shinji ordered the tracker to “initiate the virus.” It is at this point that we see the effect, with the cube carrying the image of Strike melting away under a bright light.

JM-36-bboard-E

While visual representations of cyber attacks are common in books and now TV and films, real world computer designers complain that no system under attack would waste processing power on rendering special effects. This is true for the defenders, but the attackers might want to show their power with a flashy display. Or perhaps these visual effects are generated by Johnny’s own cyberspace system, the 2021 equivalent of today’s warning message that a web site certificate cannot be verified. It’s certainly more attention-grabbing than a small padlock icon disappearing from one corner of your browser window.

At this point the Yakuza arrive in reality, and Jane takes the headset off and drags Johnny out of the shop.

 

Cyberspace: Newark Copyshop

The transition from Beijing to the Newark copyshop is more involved. After travelling around a bit, Johnny realizes he needs to be looking back in Newark. He “rewinds” using a pull gesture and sees the copyshop’s pyramid. First there is a predominantly blue window that unfolds as if it were paper.

jm-35-copyshop-a-animated

And then the copyshop’s initial window expands. Like the Beijing hotel, this is a floor plan view, but unlike the hotel it stays two dimensional. It appears that cyberspace works like the current world wide web, with individual servers for each location that can choose what appearance to present to visitors.

Johnny again selects data records, but not with a voice command. The first transition is a window that not only expands but spins as it does so, and makes a strange jump at the end from the centre to the upper left.

jm-35-copyshop-c-animated

Once again Johnny uses the two-handed expansion gesture to see the table view of the records.

Cyberspace: Navigation

Cyberspace is usually considered to be a 3D spatial representation of the Internet, an expansion of the successful 2D desktop metaphor. The representation of cyberspace used in books such as Neuromancer and Snow Crash, and by the film Hackers released in the same year, is an abstract cityscape where buildings represent organisations or individual computers, and this is what we see in Johnny Mnemonic. How does Johnny navigate through this virtual city?

Gestures and words for flying

Once everything is connected up, Johnny starts his journey with an unfolding gesture. He then points both fingers forward. From his point of view, he is flying through cyberspace. He then holds up both hands to stop.

jm-31-navigation-animated

Both these gestures were commonly used in the prototype VR systems of 1995. They do, however, conflict with the more common gestures for manipulating objects in volumetric projections that are described in Make It So chapter 5. It will be interesting to see which set of gestures is eventually adopted, or whether they can co-exist.

Later we will see Johnny turn and bank by moving his hands independently.

jm-31-navigation-f

We also see him using voice commands, saying “hold it” to stop forward motion immediately. Later we see him stretch one arm out and bring it back, apparently reversing a recent move.

jm-31-navigation-e

In cyberpunk and related fiction users fly everywhere in cyberspace, a literal interpretation of the spatial metaphor. This is also how users in our real world MUD and MOO cyberspaces start. After a while, travelling through all the intermediate locations between your start and destination gets tedious. MUDs and MOOs allow teleporting, a direct jump to the desired location, and the cyberspace in Johnny Mnemonic has a similar capability.

Gestures for teleporting

Mid sequence, Johnny wants to jump to the Beijing hotel where the upload took place. To do this, he uses a blue geometric shape at the lower left of his view, looking like a high tech, floating tetrahedron. Johnny slowly spins this virtual object using repeated flicking gestures with his left hand, with his ring and middle fingers held together.

jm-31-navigation-2-animated

It looks very similar to the gesture used on a current-day smartphone to flick through a photo album or set of application icon screens. And in this case, it causes a blue globe to float into view (see below).

Johnny grabs this globe and unfolds it into a fullscreen window, using the standard Hollywood two handed “spread” gesture described in Chapter 5 of Make It So.

jm-32-beijing-a-animated

The final world map fills the entire screen. Johnny uses his left hand to enter a number on a HUD style overlay keypad, then taps on the map to indicate China.

jm-32-beijing-c

jm-32-beijing-d

I interpret this as Johnny using the hotel phone number to specify his destination. It would not be unusual for there to be multiple hotels with the same name within a city such as Beijing, but the phone number should be unique. Since Johnny is currently in North America, though, he must also specify the international dialing code or its 2021 equivalent, which he can do just by pointing. And this is a well-designed user interface, accepting not only multimodal input but input in any order, rather than forcing the user to enter the country code first.

Keyboards and similar physical devices often don’t translate well into virtual reality, because tactile feedback is non-existent. Even touch typists need the feeling of the physical keyboard, in particular the slight concavity of the key tops and the orientation bumps on the F and J keys, to keep their fingers aligned. Here though there is just a small grid of virtual numbers which doesn’t require extended typing. Otherwise this is a good design, allowing Johnny to type a precise number and just point to a larger target.
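The order-independence praised above is easy to state but worth spelling out: the system holds a slot for each modality and dials only once both are filled, whichever arrives first. Here is a minimal sketch of that pattern; the class name, slot names, and the “connect” string are all invented for illustration, not anything shown in the film.

```python
# Hypothetical sketch of order-independent multimodal input:
# the dial fires once both the typed number and the pointed-at
# country code are present, in whichever order they arrive.

class MultimodalDialer:
    def __init__(self):
        self.slots = {"number": None, "country": None}

    def fill(self, slot: str, value: str):
        """Record one input; return the connection string when complete."""
        self.slots[slot] = value
        if all(v is not None for v in self.slots.values()):
            return self.dial()
        return None  # still waiting on the other modality

    def dial(self) -> str:
        return f"connect {self.slots['country']}+{self.slots['number']}"
```

The design choice being modeled is that neither modality is privileged: pointing at China before typing the number works exactly as well as the reverse.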

Next

After he taps a location, the zoomrects indicate a transition into a new cyberspace, in this case, Beijing.

Cyberspace: the hardware

And finally we come to the often-promised cyberspace search sequence, my favourite interface in the film. It starts at 36:30 and continues, with brief interruptions to the outside world, to 41:00. I’ll admit there are good reasons not to watch the entire film, but if you are interested in interface design, this will be five minutes well spent. Included here are the relevant clips, lightly edited to focus on the user interfaces.

Click to see video of The cyberspace search.

Click to see Board conversation, with Pharmakom tracker and virus

First, what hardware is required?

Johnny and Jane have broken into a neighbourhood computer shop, which in 2021 will have virtual reality gear just as today even the smallest retailer has computer mice. Johnny clears miscellaneous parts off a table and then sits down, donning a headset and datagloves.

jm-30-hardware-a

Headset

Headsets haven’t really changed much since 1995 when this film was made. Barring some breakthrough in neural interfaces, they remain the best way to block off the real world and immerse a user into the virtual world of the computer. It’s mildly confusing to a current day audience to hear Johnny ask for “eyephones”, which in 1995 was the name of a particular VR headset rather than the popular “iPhone” of today.

Talking to a Puppet

As mentioned, Johnny in the last phone conversation in the van is not talking to the person he thinks he is. The film reveals Takahashi at his desk, using his hand as if he were a sock puppeteer—but there is no puppet. His desk is emitting a grid of green light to track the movement of his hand and arm.

jm-22-puppet-call-c

The Make It So chapter on gestural interfaces suggests Takahashi is using his hand to control the mouth movements of the avatar. I’d clarify this a bit. Lip synching by human animators is difficult even when not done in real time, and while it might be possible to control the upper lip with four fingers, one thumb is not enough to provide realistic motion of the lower lip.

Iron Man HUD: A Breakdown

So this is going to take a few posts. You see, the next interface that appears in The Avengers is a video conference between Tony Stark in his Iron Man supersuit and his partner in romance and business, Pepper Potts, about switching Stark Tower from the electrical grid to their independent power source. Here’s what a still from the scene looks like.

Avengers-Iron-Man-Videoconferencing01

So on the surface of this scene, it’s a communications interface.

But that chat exists inside of an interface with a conceptual and interaction framework that has been laid down since the original Iron Man movie in 2008, and built upon with each sequel, one in 2010 and one in 2013. (With rumors aplenty for a fourth one…sometime.)

So to review the video chat, I first have to talk about the whole interface, and that has about 6 hours of prologue occurring across 4 years of cinema informing it. So let’s start, as I do with almost every interface, simply by describing it and its components.

Carrier Control

In the second instantiation of videochat with the World Security Council that we see (here’s the first one), Fury receives their order to bomb the site of the Chitauri portal. He takes this call on the bridge, and rather than using a custom hardware setup, this is a series of windows that overlay an ominous-red map of the world in an app called CARRIER CONTROL. These windows represent a built-in chat feature for discussing this very topic. There is some fuigetry on the periphery, but our focus is on these windows and the conversation happening through them.

Avengers-fury-secure-transmission01

In this version of the chat, we are assured that it is a SECURE TRANSMISSION by a legend across the top of each, but there is not the same level of assurance as in the videoconference room. If it’s still HOTP, Fury isn’t notified of it. There’s a tiny 01_AZ in the upper right of every screen, but it never changes and is the same for each participant. (An homage to Arizona? Lighter Andrew Zink? Cameraman Arthur Zajac?) Though this is a more desperate situation, you imagine that the need for security is no less dire. Having that same cypher key would be comforting if it is in fact a policy.
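HOTP, mentioned above, is a real counter-based one-time-password scheme (RFC 4226): an HMAC-SHA1 over a shared counter, dynamically truncated to a short decimal code. If that tiny `01_AZ` legend were an HOTP-style cypher key, it would change with every session rather than sitting static. A minimal sketch of the real algorithm:

```python
# HOTP per RFC 4226: HMAC-SHA1 over a big-endian 8-byte counter,
# then "dynamic truncation" to a short decimal code.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC’s test secret `b"12345678901234567890"`, counter 0 yields `755224` and counter 1 yields `287082` — each code is distinct, which is exactly the assurance the unchanging `01_AZ` fails to give Fury.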

Different sizes of windows in the app seem to indicate a hierarchy, since the largest window is the fellow who does most of the talking in both conferences, and it does not change as others speak. Such an automated layout would spare Fury the hassle of having to manage multiple windows, though visually these look more like individual objects he’s meant to manipulate. Poor affordances.

dismiss

The only control we see is when Fury dismisses them, and to do this he just taps at the middle of the screen. The teleconference window is “push wiped” by a satellite view of New York City. Fine, he feels like punching them. But…

a) How does he actually select something in that interface without a tap?

b) A swipe would have been more meaningful, and in line with the gestural pidgin I identified in the gestural chapter of the book.

And of course, if this was the real world, you’d hope for better affordances for what can be done on this window across the board.

So, though mostly effective narratively, it could use some polish.

Dat glaive: Enthrallment

Several times throughout the movie, Loki places the point of the glaive on a victim’s chest near their heart, and a blue fog passes from the stone to infect them: an electric blackness creeps upward along their skin from their chest until it reaches their eyes, which turn fully black for a moment before becoming the same ice blue of the glaive’s stone, and we see that the victim is now enthralled into Loki’s servitude.

Enthralling_Hawkeye

You have heart.

The glaive is very, very terribly designed for this purpose.

The bug VP

StarshipT_030

In biology class, the (unnamed) professor points her walking stick (she’s blind) at a volumetric projector. The tip flashes for a second, and a volumetric display comes to life. It illustrates for the class what one of the bugs looks like. The projection device is a cylinder with a large lens atop a rolling base. A large black plug connects it to the wall.

The display of the arachnid appears floating in midair, a highly saturated screen-green wireframe that spins. It has very slight projection rays at the cylinder and a "waver" of a scan line that slowly rises up the display. When it initially illuminates, the channels are offset and only unify after a second.

STARSHIP_TROOPERS_vdisplay

StarshipT_029

The top and bottom of the projection are ringed with tick lines, and several tick lines run vertically along the height of the bug for scale. A large, lavender label at the bottom identifies this as an ARACHNID WARRIOR CLASS. There is another lavender key too small for us to read. The arachnid in the display is still, though the display slowly rotates around its y-axis, clockwise from above. The instructor uses this as a backdrop for discussing arachnid evolution and “virtues.”

After the display continues for 14 seconds, it shuts down automatically.

STARSHIP_TROOPERS_vdisplay2

Interaction

It’s nice that it can be activated with her walking stick, an item we can presume isn’t common, since she’s the only apparently blind character in the movie. It’s essentially gestural, though what a blind user needs with a flash for feedback is questionable. Maybe that signal is somehow for the students? What happens for sighted teachers? Do they need a walking stick? Or would a hand do? What’s the point of the flash then?

That it ends automatically seems pointlessly limited. Why wouldn’t it continue to spin until it’s dismissed? Maybe the way she activated it indicated it should only play for a short while, but it didn’t seem like that precise a gesture.

Of course it’s only one example of interaction, but there are so many other questions to answer. Are there different models that can be displayed? How would she select a different one? How would she zoom in and out? Can it display animations? How would she control playback? There are quite a lot of unaddressed details for an imaginative designer to ponder.

Display

The display itself is more questionable.

Scale is tough to tell on it. How big is that thing? Students would have seen video of it for years, so maybe it’s not such an issue. But a human for scale in the display would have been more immediately recognizable. Or better yet, no scale: Show the thing at 1:1 in the space so its scale is immediately apparent to all the students. And more appropriately, terrifying.

And why the green wireframe? The bugs don’t look like that. If it was showing some important detail, like carapace density, maybe, but this looks pretty even. How about some realistic color instead? Do they think it would scare kids? (More than the “gee-whiz!” girl already is?)

And lastly there’s the title. Yes, having it rotate accommodates viewers in 360 degrees, but it only reads right for half the time. Copy it, flip it 180º on the y-axis, and stack it, and you’ve got the most important textual information readable at most any time from the display.
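The geometry behind that fix is simple enough to check: a flat label is legible only while it faces within roughly 90° of the viewer, so one copy covers half the rotation and a second copy, flipped 180°, covers the rest. A small sketch of that coverage argument (the readability threshold is my assumption, not anything measured from the film):

```python
# Hypothetical sketch: a label on a rotating display is legible
# while its face is within ~90 degrees of the viewer. A mirrored
# copy at 180 degrees keeps one label readable at every angle.

def readable(angle_deg: float, label_angles=(0,)) -> bool:
    """True if any label copy faces within 90 degrees of the viewer."""
    return any(
        abs(((angle_deg - a + 180) % 360) - 180) <= 90
        for a in label_angles
    )
```

A single label at 0° fails the test when the display has turned to 180°; adding the flipped copy makes the check pass at every angle of the rotation.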

Better of course is more personal interaction: individual displays or augmented reality where a student can turn the arachnid to examine it themselves, control the zoom, or follow up on more information. (Want to know more?) But the school budget in the world of Starship Troopers was undoubtedly stripped to increase military budget (what a crappy world that would be amirite?), and this single mass display might be more cost effective.

Thermoptic camouflage

GitS-thermoptic-03

Kusanagi is able to mentally activate a feature of her skintight bodysuit and hair(?!) that renders her mostly invisible. It does not seem to affect her face by default. After her suit has activated, she waves her hand over her face to hide it. We do not see how she activates or deactivates the suit in the first place. She seems to be able to do so at will. Since this is not based on any existing human biological capacity, a manual control mechanism would need some biological or cultural referent. The gesture she uses—covering her face with open-fingered hands—makes the most sense, since even with a hand it means, “I can see you but you can’t see me.”

In the film we see Ghost Hacker using the same technology embedded in a hooded coat he wears. He activates it by pulling the hood over his head. This gesture makes a great deal of physical sense, similar to the face-hiding gesture. Donning a hood would hide your most salient physical identifier, your face, so having it activate the camouflage is a simple synecdochic extension.

GitS-thermoptics-30

The spider tank also features this same technology on its surface, where we learn how delicate that surface is: it is disabled by a rain of glass falling on it.

GitS-spidertank-01

This tech is less than perfect, distorting the background behind it and occasionally flashing with vigorous physical activity. And of course it cannot hide the effects that the wearer is creating in the environment, as we see with splashes in the water and citizens in a crowd being bumped aside.

Since this imperfection runs counter to the wearer’s goal, I’d design a silent, perhaps haptic, feedback channel to let the wearer know when they’re moving too fast for the suit’s processors to keep up, as a reinforcement to whatever visual effects they themselves are seeing.

UPDATE: When this was originally posted, I used the incorrect concept “metonym” to describe these gestures. The correct term is “synecdoche” and the post has been updated to reflect that.