Following Dr. Brown’s instructions, Marty heads to Café 80s where the waitstaff consists of television screens mounted on articulated arms which are suspended from the ceiling, allowing them to reach anyplace in the café. Each screen has a shelf on which small items can be delivered to a patron. Each screen features a different celebrity from the 1980s, rendered as a computer talking head and done in a jittery Max Headroom style.
Fueling stations are up on a raised platform. Cars can drive or land there and approach a central column. A rotating overhead arm maneuvers a liquid fuel dispensing robot into place near the car while a synthesized voice crudely welcomes the driver, delivers a marketing slogan, and announces its actions, e.g. checking oil and checking landing gear.
After Johnny was mistakenly reported as killed, the next time we see him he is in a healing chamber, submerged in green-underlit translucent fluid, resting on form-fitting clear plastic supports. He breathes through a tube, and a pair of small robot arms work busily to regenerate the damaged tissue in his leg.
The main reason to discuss this chamber on a blog about interfaces is the material choice for the outside of the chamber. Because Johnny is completely surrounded by a transparent material (glass? plexiglass? transparent aluminum?), physicians can keep an eye on his progress, and he can have visual interactions with visitors, as we see when Dizzy and Ace visit to share with him his mistaken death certificate (and for Dizzy to leave him a kiss). Additionally, it gives Johnny something to look at during the long hours of recuperation.
I’m not sure why the green light is necessary. The scene implies that it could serve some part in the healing process, but if not, I wonder if an amber light might signal a more human, nurturing warmth to Johnny and visitors. Narratively, you’d want to avoid anything too yellow, or you run the risk of the audience’s first interpretations drifting too far toward the Andres-Serrano-esque.
The other major benefit to the users of the chair (besides the ease of travel and lifestyle) is the total integration of the occupant’s virtual social life, personal life, fashion (or lack-thereof), and basic needs in one device. Passengers are seen talking with friends remotely, not-so-remotely, playing games, getting updated on news, and receiving basic status updates. The device also serves as a source of advertising (try blue! it’s the new red!).
A slight digression: What are the ads there for? Considering that the Axiom appears to be an all-inclusive permanent resort model, the ads could be an attempt to steer passengers to using resources that the ship knows it has a lot of. This would allow a reprieve for heavily used activities/supplies to be replenished for the next wave of guests, instead of an upsell maneuver to draw more money from them. We see no evidence of exchange of money or other economic activity while on-board the Axiom…
OK, back to the social network.
It isn’t obvious what the form of authentication is for the chairs. We know that the chairs have information about who the passenger prefers to talk to, what they like to eat, where they like to be aboard the ship, and what their hobbies are. With that much information, if there were no constant authentication, an unscrupulous passenger could easily hop into another person’s chair, “impersonate” them on their social network, and play havoc with it. That’s not right.
It’s possible that the chair only works for the person using it, or only accesses the current passenger’s information from a central computer in the Axiom, but it’s never shown. What we do know is that the chair activates when a person is sitting on it and paying attention to the display, and that it deactivates as soon as that display is cut or the passenger leaves the chair.
We aren’t shown what happens when the passenger’s attention is drawn away from the screen, since they are constantly focused on it while the chair is functioning properly.
If it doesn’t already exist, the hologram should have an easy-to-push button or gesture that can dismiss the picture. This would allow the passenger to quickly interact with the environment when needed, then switch back to the social network afterwards.
And, for added security in case it doesn’t already exist, biometrics would be easy for the Axiom. Tracking the chair user’s voice, near-field chip, fingerprint on the control arm, or retina scan would provide strong security for what is a very personal activity and device. This system should also have strong protection on the back end to prevent personal information from getting out through the Axiom itself.
Social networks hold a lot of very personal information, and the network should have protections against the wrong person manipulating that data. Strong authentication can prevent both identity theft and social humiliation.
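To make the multi-factor idea concrete, here's a minimal sketch of how the chair might gate access to a passenger's profile. Everything in it is invented for illustration: the passenger ID, the factor names, and the enrolled values are hypothetical, and a real system would compare biometric templates, not string hashes.

```python
# Hypothetical sketch: unlock the chair's social profile only when at
# least two biometric factors agree with what was enrolled. All names
# and values below are invented for illustration.

ENROLLED = {
    "passenger-42": {"voice": "v1", "fingerprint": "f1", "nfc": "chip1"},
}

def authenticate(passenger_id, presented, required_matches=2):
    """Return True only when enough presented factors match enrollment."""
    enrolled = ENROLLED.get(passenger_id)
    if enrolled is None:
        return False  # unknown passenger: never unlock
    matches = sum(
        1 for factor, value in presented.items()
        if enrolled.get(factor) == value
    )
    return matches >= required_matches
```

The key design point is the `required_matches` threshold: a single stolen factor (say, a recorded voice) is not enough to impersonate someone, which addresses the chair-hopping scenario above.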
Taking the occupant’s complete attention
While the total immersion of social network and advertising seems dystopian to us (and that’s without mentioning the creepy way the chair removes a passenger’s need for most physical activity), the chair looks genuinely pleasing to its users.
They enjoy it.
But like a drug, their enjoyment comes at the detriment of almost everything else in their lives. There seem to be plenty of outlets on the ship for active people to participate in their favorite activities: Tennis courts, golf tees, pools, and large expanses for running or biking are available but unused by the passengers of the Axiom.
Work with the human need
In an ideal world a citizen is happy, has a mixture of leisure activities, and produces something of benefit to the civilization. In the case of this social network, the design has ignored every aspect of a person’s life except moment-to-moment happiness.
This has parallels in goal-driven design, where distinct goals (BNL wants to keep people occupied on the ship, keep them focused on the network, and collect as much information as possible about what everyone is doing) direct the design of an interface. When goal-driven means data-driven, the data being collected instantly becomes the determining factor of whether a design will succeed or fail. The right data goals mean the right design; the wrong data goals mean the wrong design.
Instead of just occupying a person’s attention, this interface could have instead been used to draw people out and introduce them to new activities at intervals driven by user testing and data. The Axiom has the information and power, perhaps even the responsibility, to direct people to activities that they might find interesting. Even though the person wouldn’t be looking at the screen constantly, it would still be a continuous element of their day. The social network could have been their assistant instead of their jailer.
One of the characters even exclaims that she “didn’t even know they had a pool!”, indicating that she would have loved to try it, but the closed nature of the chair’s social network kept her from learning about it and enjoying it. By directing people to ‘test’ new experiences aboard the Axiom and releasing them from its grip occasionally, the social network could have acted as an assistant instead of an attention sink.
Moment-to-moment happiness might have declined, but overall happiness would have gone way up.
The best way for designers to affect the outcome of these situations is to help shape the business goals and metrics of a project. In a situation like this, after the project had launched, a designer could step in and point out those moments where a passenger was pleasantly surprised, or clearly in need of something to do, and help build a business case around serving those needs.
The obvious moments of happiness (that this system solves for so well) could then be augmented by serendipitous moments of pleasure and reward-driven workouts.
We must build products for more than just fleeting pleasure
As soon as the Axiom lands back on Earth, the entire passenger complement leaves the ship (and the social network) behind.
It was such a superficial pleasure that people abandoned it without hesitation when they realized that there was something more rewarding to do. That’s a parallel that we can draw to many current products. A product can keep attention for now, but something better will come along, and then its users will abandon it.
A company can produce a product or piece of software that fills a quick need and initially looks successful. But, that success falls apart as soon as people realize that they have larger and tougher problems that need solving.
Ideally, a team of designers at BNL would have watched after the initial launch and continued improving the social network. By helping people continue to grow and learn new skills, the social network could have kept the people aboard the Axiom in top condition both mentally and physically. By the time Wall-E came around, and life finally began to return to Earth, the passengers would have been ready to return and rebuild civilization on their own.
To the designers of a real Axiom Social Network: You have the chance to build a tool that can save the world.
We know you like blue! Now it looks great in Red!
The Hover Chair is a ubiquitous, utilitarian, all-purpose assisting device. Each passenger aboard the Axiom has one. It is a mix of a beach-side deck chair, fashion accessory, and central connective device for the passenger’s social life. It hovers about knee height above the deck, providing a low surface to climb into, and a stable platform for travel, which the chair does a lot of.
A Universal Wheelchair
We see that these chairs are used by everyone by the time that Wall-E arrives on the Axiom. From BNL’s advertising, though, this does not appear to be the original intent. One of the billboards on Earth advertising the Axiom-class ships shows an elderly family member using the chair, allowing them to interact with the rest of the family on the ship without issue. In other scenes, the chairs are used by a small number of people relaxing around other, more active passengers.
At some point between the initial advertising campaign and the current day, use went from the elderly and physically challenged to a device used 24/7 by all humans on-board the Axiom. This extends all the way down to the youngest children seen in the nursery, though they are given modified versions more suited to their age and disposition. BNL shows here that their technology is excellent at providing comfort as an easy choice, but that it is extremely difficult to undo that choice and regain personal control.
But not a perfect interaction
When the artificial intelligence autopilot, Otto, refused to give up authority, the Captain wrested control of the Axiom from him. Otto’s body is the helm wheel of the ship, and it fights back against the Captain. Otto wants to fulfill BNL’s orders to keep the ship in space. As they fight, the Captain dislodges a cover panel over Otto’s off switch. When the Captain sees the switch, he immediately realizes that he can regain control of the ship by deactivating Otto. After the Captain fights his way to the switch and flips it, Otto deactivates and the helm reverts to a manual control interface for the ship.
The panel of buttons showing Otto’s current status, next to the on/off switch, deactivates half its lights when the Captain switches over to manual. The dimmed icons indicate which systems are now offline. The Captain then effortlessly returns the ship to its proper flight path with a quick turn of the controls.
One interesting note is the similarity between Otto’s stalk control keypad and the keypad on the Eve Pod. Both have the circular button in the middle, with blue buttons in a semi-radial pattern around it. Given the Eve Pod’s interface, this should also be a series of start-up buttons or option commands. The main difference here is that they are all lit, whereas the Eve Pod’s buttons were dim until hit. Since every other interface on the Axiom glows when in use, it looks like all of Otto’s commands and autopilot options are active when the Captain deactivates him.
A hint of practicality…
The panel is in a place that is accessible and would be easily located by service crew or trained operators. Given that the Axiom is a spaceship, the systems on board are probably heavily regulated and redundant. However, the panel isn’t easily visible thanks to specific decisions by BNL. This system makes sense for a company that doesn’t think people need or want to deal with this kind of thing on their own.
Once the panel is open, the operator has a clear view of which systems are on and which are off. The major downside to this keypad (like the Eve Pod’s) is that the coding of the information is obscure. These cryptic buttons would only be understandable to a highly trained operator, programmer, or setup technician for the system. Given the current state of the Axiom, unless the crew were to check the autopilot manual, it is likely that no one on board the ship knows what those buttons mean anymore.
Thankfully, the most important button is in clear English. We know English is important to BNL because it is the language of the ship and the language seen being taught to the new children on board. Anyone who had an issue with the autopilot system and could locate the button would know which button press would turn Otto off (as we then see the Captain immediately do).
Considering that Buy-N-Large’s mission is to create robots to fill humans’ every need, saving them from every tedious or unenjoyable job (garbage collecting, long-distance transportation, complex integrated systems, sports), it was both interesting and reassuring to see that there are manual overrides on their mission-critical equipment.
The opposite situation could get a little tricky, though. If the ship were in manual mode, with the door closed and no qualified or trained personnel on the bridge, it would be incredibly difficult for anyone to figure out how to physically turn the ship back to autopilot. A hidden emergency control is useless in an emergency.
Hopefully, considering the heavy use of voice recognition on the ship, there is a way for the ship to recognize an emergency situation and quickly take control. We know this is possible because we see the ship completely take over and run through a Code Green procedure to analyze whether Eve had actually returned a plant from Earth. In that instance, the ship only required a short, confused grunt from the Captain to initiate a very complex procedure.
Security isn’t an issue here because we already know that the Axiom screens visitors to the bridge (the Gatekeeper). By tracking who is entering the bridge using the Axiom’s current systems, the ship would know who is and isn’t allowed to activate certain commands. The Gatekeeper would either already have this information coded in, or be able to activate it when he allowed people into the bridge.
For very critical emergencies, a system that could recognize a spoken ‘off’ command from senior staff or trained technicians on the Axiom would be ideal.
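The role-restricted voice override described above is straightforward to model. In this sketch, every name and role is invented; a real system would pair speaker recognition with the crew roster the Axiom already maintains.

```python
# Hedged sketch: a spoken "off" command is honored only when the
# recognized speaker holds a privileged role. Crew names and roles
# below are invented for illustration.

AUTHORIZED_ROLES = {"captain", "senior-technician"}
CREW = {"mccrea": "captain", "steward-7": "service-bot"}

def handle_voice_command(speaker, command):
    """Disengage the autopilot only for privileged, recognized speakers."""
    if command != "off":
        return "ignored"
    if CREW.get(speaker) in AUTHORIZED_ROLES:
        return "autopilot disengaged"
    return "unauthorized"
```

Note that the check runs on the *speaker*, not just the phrase, which is exactly the property Gort's voice interface (discussed elsewhere on this blog) lacks.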
Anti-interaction as Standard Operating Procedure
The hidden door and the obscure, hard-wired off button continue the mission of Buy-N-Large: to encourage citizens to give up control for comfort, and to make it difficult to undo that decision. Seeing as how the citizens are more than happy to give up that control at first, it looks like a profitable assumption for Buy-N-Large, at least in the short term. In the long term we can take comfort that the human spirit–aided by an adorable little robot–will prevail.
So for BNL’s goals, this interface is fairly well designed. But for the real world, you would want some sort of graceful degradation that would enable qualified people to easily take control in an emergency. Even the most highly trained technicians appreciate clearly labeled controls and overrides so that they can deal directly with the problem at hand rather than fighting with the interface.
After the security ‘bot brings Eve across the ship (with Wall-e in tow), he arrives at the gatekeeper to the bridge. The Gatekeeper has the job of entering information about ‘bots, or activating and deactivating systems (labeled with “1”s and “0”s) into a pedestal keyboard with two small manipulator arms. It’s mounted on a large, suspended shaft, and once it sees the security ‘bot and confirms his clearance, it lets the ‘bot and the pallet through by clicking another, specific button on the keyboard.
The Gatekeeper is large. Larger than most of the other robots we see on the Axiom. Its casing is a white shell around its inner hardware. This casing looks like it’s meant to protect or shield the internal components from light impacts or basic problems like dust. From the looks of the inner housing, the Gatekeeper should be able to move its ‘head’ up and down to point its eye in different directions, but while Wall-e and the security ‘bot are in the room, we only ever see it rotating around its suspension pole and using the glowing pinpoint in its red eye to track the objects it’s paying attention to.
When it lets the sled through, it sees Wall-e on the back of the sled, who waves to the Gatekeeper. In response, the Gatekeeper waves back with its jointed manipulator arm. After waving, the Gatekeeper looks at its arm. It looks surprised at the arm movement, as if it hadn’t considered the ability to use those actuators before. There is a pause that gives the distinct impression that the Gatekeeper is thinking hard about this new ability, then we see it waving the arm a couple more times to itself to confirm its new abilities.
The Gatekeeper seems to exist solely to enter information into that pedestal. From what we can see, it doesn’t move and likely (considering the rest of the ship) has been there since the Axiom’s construction. We don’t see any other actions from the pedestal keys, but considering that one of them opens a door temporarily, it’s possible that the other buttons have some other, more permanent functions like deactivating the door security completely, or allowing a non-authorized ‘bot (or even a human) into the space.
An unutilized sentience
The robot is a sentient being with a tedious and repetitive job, who doesn’t even know it can wave its arm until Wall-e introduces the Gatekeeper to the concept. This fits with the other technology on board the Axiom, where intelligence lacks any correlation to the robot’s function. Thankfully for the robot, it doesn’t realize its lack of a larger world until that moment.
So what’s the pedestal for?
It still leaves open the question of what the pedestal controls actually do. If they’re all connected to security doors throughout the ship, then the Gatekeeper would have to be tied into the ship’s systems somehow to see who was entering or leaving each secure area.
The pedestal itself acts as a two-stage authentication system. The Gatekeeper has a powerful sentience, and must decide if the people or robots in front of it are allowed to enter the room or rooms it guards. Then, after that decision, it must make a physical action to unlock the door to enter the secure area. This implies a high level of security, which feels appropriate given that the elevator accesses the bridge of the Axiom.
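That two-stage pattern (a judgment call, then a distinct physical action) can be sketched in a few lines. The clearance list and function names here are invented; the point is only that a positive decision opens nothing by itself.

```python
# Minimal sketch of the Gatekeeper's two-stage authorization:
# stage 1 decides, stage 2 requires a separate unlock action.
# The clearance list is invented for illustration.

CLEARED = {"security-bot", "eve-pod"}

def stage_one_decide(visitor):
    """Stage 1: the Gatekeeper's judgment about who may pass."""
    return visitor in CLEARED

def stage_two_unlock(decision, button_pressed):
    """Stage 2: the door opens only when a positive decision is
    followed by an explicit button press on the pedestal."""
    return decision and button_pressed
```

Separating the two stages means a glitch in either one fails safe: a cleared visitor still waits at a closed door until the button is actually pressed.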
Since we’ve seen that the robots have different vision modes and improvements based on their function, it’s likely that the Gatekeeper can see more of the pedestal interface than the audience can, possibly including which doors each key links to. If not, then as a computer it would have perfect recall of what each button was for. This design leaves no room for a human to step in and take control in case the Gatekeeper has issues (like the robots seen soon after this in the ‘medbay’). But, considering Buy-N-Large’s desire to leave humans out of the loop at each possible point, this seems like a reasonable design direction for the company to take if they wanted to continue that trend.
It’s possible that the pedestal was intended for a human security guard that was replaced after the first generation of spacefarers retired. Another possibility is that Buy-N-Large wanted an obvious sign of security to comfort passengers.
We learn after this scene that the security ‘bot is Otto’s ‘muscle’ and affords some protection. Given that the Security ‘bot and others might be needed at random times, it feels like he would want a way to gain access to the bridge in an emergency. Something like an integrated biometric scanner on the door that could be manually activated (eye scanner, palm scanner, RFID tags, etc.), or even a physical key device on the door that only someone like the Captain or trusted security officers would be given. Though that assumes there is more than one entrance to the bridge.
This is a great showcase system for tours and commercials of an all-access luxury hotel and lifeboat. It looks impressive, and the Gatekeeper would be an effective way to make sure only people who are really supposed to get into the bridge are allowed past the barriers. But, Buy-N-Large seems to have gone too far in their quest for intelligent robots and has created something that could be easily replaced by a simpler, hard-wired security system.
While preparing for his night cycle, Wall-E is standing at the back of his transport/home, cleaning out his collection cooler on the transport’s drop door. In the middle of this ritual, an alert sounds from his external speakers. Concerned by the sound, Wall-E looks up to see a dust storm approaching. After seeing this, he hurries to finish cleaning his cooler and seal the door of the transport.
A Well Practiced Design
The Dust Storm Alert appears to override Wall-E’s main window into the world: his eyes. This is done to warn him of a very serious event that could damage him or permanently shut him down. What is interesting is that he doesn’t appear to register a visual response first. Instead, we first hear the audio alert, then Wall-E’s eye-view shows the visual alert afterward.
Given the order of the two parts of the alert, the audible part was considered the most important piece of information by Wall-E’s designers. It comes first, is omnidirectional as well as loud enough for everyone nearby to hear, and is followed by more explicit information.
Equal Opportunity Alerts
By having the audible alert first, all Wall-E units, other robots, and people in the area would be alerted of a major event. Then, the Wall-E units would be given the additional information like range and direction that they need to act. Either because of training or pre-programmed instructions, Wall-E’s vision does not actually tell him what the alert is for, or what action he should take to be safe. This could also be similar to tornado sirens, where each individual is expected to know where they are and what the safest nearby location is.
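The ordering described above is a priority sequence: a broadcast channel everyone perceives, then a private channel with actionable detail. A trivial sketch makes the structure explicit; the message text and channel names are invented.

```python
# Sketch of the two-stage alert ordering: the broadcast audio tone
# fires before the unit-specific visual overlay. Messages invented.

def dust_storm_alert(log):
    """Append the alert stages to `log` in priority order."""
    log.append(("audio", "storm siren: everyone nearby hears this"))
    log.append(("visual", "overlay: range and bearing for this unit"))
    return log

events = dust_storm_alert([])
```

Keeping the stages in a fixed, explicit sequence (rather than firing both at once) is what guarantees the "equal opportunity" property: bystanders without a visual channel still get the warning first.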
For humans working alongside Wall-E units, each person should have their own heads-up display, likely similar to a Google Glass-style device. When a Wall-E unit gets a dust storm alert, the human could then receive a sympathetic alert and guidance to the nearest safe area. Combined with regular training and storm drills, people in the wastelands of Earth would then know exactly what to do.
Why Not Network It?
Whether by luck or proper programming, the alert is triggered with just enough time for Wall-E to get back to his shelter before the worst of the storm hits. Given that the alert didn’t trigger until Wall-E was able to see the dust cloud for himself, this feels like very short notice. Too short notice. A good improvement to the system would be a connection up to a weather satellite in orbit, or a weather broadcast in the city. This would allow him to be pre-warned and take shelter well before any of the storm hits, protecting him and his solar collectors.
Other than this, the alert system is effective. It warns Wall-E of the approaching storm in time to act, and it also warns everyone in the local vicinity of the same issue. While the alert doesn’t inform everyone of what is happening, at least one actor (Wall-E) knows what it means and knows how to react. As with any storm warning system, having a connection that can provide forecasts of potentially dangerous weather would be a huge plus.
The Axiom Return Vehicle’s (ARV’s) first job is to drop off Eve and activate her for her mission on Earth. The ARV acts as the transport from the Axiom, landing on the surface of Earth to drop off Eve pods, then returning after an allotted time to retrieve the pods and return them to the Axiom.
The ARV drops Eve at the landing site by Wall-E’s home, then pushes a series of buttons on her front chest. The buttons light up as they’re pushed, showing up blue just after the arm clicks them. At the end of the button sequence, Eve wakes up and immediately begins scanning the ground directly in front of her. She then continues scanning the environment, leaving the ARV to drop off more Eve Pods elsewhere.
If It Ain’t Broke…
There’s an oddity in the ARV’s use of such a crude input device to activate Eve. At first glance, it seems like a system that provides a backup interface for a human user, allowing Eve to be activated by a person on the ground in the event of an AI failure or a human-led research mission. But this seems awkward in use, because Eve’s front contains no indication of what each button does or what sequence is required.
A human user of the system would be required to memorize the proper sequence as a physical set of relationships. Without more visual cues, it would be incredibly easy for the person in that situation to push the wrong button to start with, then continue pushing wrong buttons without realizing it (unless they remembered what sound the first button was supposed to make, but then they have one more piece of information to memorize; it just spirals out of control from there).
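The failure mode just described is easy to model: with no labels and no per-press validation, one wrong key silently ruins the whole sequence. In this toy sketch the "correct" sequence and key names are invented; the second function shows what even minimal per-press feedback would buy.

```python
# Toy model of blind sequence entry on an unlabeled keypad.
# The correct sequence and key names are invented for illustration.

CORRECT_SEQUENCE = ["B3", "A1", "A2", "B1"]

def activate(pressed):
    """The pod wakes only if the entire sequence matches; the operator
    gets no hint about which press was wrong."""
    return pressed == CORRECT_SEQUENCE

def first_error(pressed):
    """What labels or per-press feedback would enable: report the
    index of the first wrong press, or None if all presses match."""
    for i, (got, want) in enumerate(zip(pressed, CORRECT_SEQUENCE)):
        if got != want:
            return i
    return None
```

With `activate` alone, the operator's only signal is total failure at the end; `first_error` localizes the mistake immediately, which is the difference the redesign below argues for.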
What was originally for people is now best used by robots.
So if it’s not for humans, what’s going on? Looking at it, the minimal interface has strong hints of having originally been designed for legacy support: large physical buttons, a coded interface, and a panel tilted upward toward a person standing above it. BNL shows a strong desire to design people out of its systems while leaving the interactions in place (see The Gatekeeper). This style of button interface looks like a legacy control kept by BNL because, by the time people weren’t needed in the system anymore, the automated technology had already been adapted to use it.
Large hints to this come from the labels. Each label is an abstract symbol, with the keys grouped into two major areas (the radial selector on the top, and the line of large squares on the bottom). For highly trained technicians meant to interact only rarely with an Eve pod, these cryptic labels would either be memorized or referenced in a maintenance manual. For BNL, the problem would only appear after both the technicians and the manual are gone.
It’s an interface that sticks around because it’s more expensive to completely redo a piece of technology than to simply iterate on it.
Despite the information hurdles, the physical parts of this interface look usable. Angling the panel makes it easier to see the keypad from a standing position, and the keys are large enough to press easily without accidentally landing on the wrong one. The feedback is also excellent, with a physical depression, a tactile click, and a backlight that trails slightly to show the last key hit for confirmation.
If I were redesigning this I would bring in the ability for a basic- or intermediate-skill technician to use this keypad quickly. An immediate win would be labeling the keys on the panel with their functions, or at least their position in the correct activation sequence. Small hints would make a big difference for a technician’s memory.
To improve it even more, I would bring in the holographic technology BNL has shown elsewhere. With an overlay hologram, the pod itself could display real-time assistance, of the right sequence of keypresses for whatever function the technician needed.
This small keypad continues to build on the movie’s themes of systems that evolve: Wall-E is still controllable and serviceable by a human, but Eve from the very start has probably never even seen a human being. BNL has automated science to make it easier on their customers.
Gort is one of the most well-known film robots from the 1950s. (He predates the most famous of them, Robby, by about 5 years.) His silent imperviousness, menacing slowness, and awesome disintegration ray make him an intimidating puzzle to the characters that face him. (“Him?” you may be wondering. The gender is apparent from the original script.) Klaatu explains that Gort was created as part of an interplanetary police force, there to ensure “complete elimination of aggression.” Klaatu explains that upon witnessing violence robots like Gort “act automatically against the aggressor,” though this behavior can be overridden. In this role Gort acts more like an independent character than a computer. Still, he is a robot, and dealings with Gort involve three interfaces: voice control, his visor, and something akin to Aldis lamp Morse code.
Gort emerges when Klaatu is wounded by a nervous, hair-triggered soldier. Gort eliminates the immediate threats with his disintegrator ray and seems intent on killing the tank commander when Klaatu issues a command in an alien language, “Gort! Deglet ovrosco!” Immediately upon hearing the instruction, Gort stops and remains motionless. Gort obeys this order until Klaatu gives him another signal by a light code, discussed below.
Gort is not just keyed to Klaatu’s voice. When Helen approaches Gort, he begins to attack her. When she speaks the words, “Klaatu barada nikto” (Yes, that Klaatu barada nikto) to Gort, he ceases his attack and carries her into the heart of the spaceship, where she is imprisoned and protected until Gort fetches and revives Klaatu. It is clearly just the words that Gort responds to, and not the speaker. This seems like a pretty big security flaw. Can any criminal issue this command and get off scot-free? Learn the Gortian command for “shoot to kill” and suddenly your protector is your assassin? This brings us, as so many things do in sci-fi, to multifactor authentication. I’ll just leave that there.
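Since the text stops at "I'll just leave that there," here's a brief sketch of what the multifactor fix would look like: require both the command phrase and a recognized speaker before standing down. The voiceprint names are invented; a real system would match acoustic features, not strings.

```python
# Hedged sketch: honor "Klaatu barada nikto" only from a trusted
# speaker. Voiceprint identifiers are invented for illustration.

TRUSTED_VOICEPRINTS = {"klaatu", "helen"}

def obey(phrase, voiceprint):
    """Stand down only when both the phrase and the speaker check out."""
    return phrase == "klaatu barada nikto" and voiceprint in TRUSTED_VOICEPRINTS
```

A stolen phrase alone is useless here, which closes the "any criminal gets off scot-free" hole, though it raises its own question: how did Helen get enrolled?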
Gort’s disintegrator ray emerges from a visor slot on his head. Gort must raise the visor before using the weapon. When armed, a small light illuminates and cyclically scans left and right in the visor space. These two modes (the raised visor and the scanning light) act as increasingly escalated signals to any observers of the seriousness of the situation.
The army intuitively understands the meaning of this signal despite never having experienced it before. They all back away in fear, and rightly so. As such, the visor acts as a signal of the readiness of a very dangerous weapon.
One night Klaatu sneaks from the boarding house back to the spaceship, around which the army has placed guards and a barrier. Klaatu finds a viewing window in the barrier, but Gort is facing away from it. To get Gort’s attention silently, Klaatu uses a flashlight to shine a series of Morse-code-like* signals onto a wall that Gort faces. In response, Gort turns to the source of the light. Klaatu continues to signal Gort directly on his visor. In this way Klaatu reactivates Gort.
This sequence implies that there are a series of channels by which Gort can be signaled, each allowing for a different constraint. Though codes like the Aldis code have a steep learning curve, and might not be appropriate for less-practiced users, they clearly have their uses in mission-critical systems that are prone to the chaos of landing on alien worlds.
*It’s not real Morse code since the third “letter” is 8 “dots,” way beyond the maximum 5 defined in Morse.
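The footnote's claim is mechanically checkable. Taking the footnote's stated maximum of 5 dot/dash elements per character (letters use up to 4, digits exactly 5), an 8-dot "letter" cannot be valid Morse. A tiny sketch:

```python
# Check a signal against the footnote's stated limit: no standard
# Morse letter or digit exceeds 5 dot/dash elements.

MAX_ELEMENTS = 5

def is_plausible_morse(letters):
    """letters: dot/dash strings per character, e.g. ['.-', '...'].
    Returns False if any character exceeds the element limit."""
    return all(len(letter) <= MAX_ELEMENTS for letter in letters)
```

So the third "letter" Klaatu flashes, at 8 dots, fails the test, confirming it's an alien code merely styled after Morse.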