Viper Controls


The Viper is the primary space fighter of the Colonial Fleet.  It comes in several varieties, from the Mark II (shown above) to the Mark VII (the latest version).  Each is made for a single pilot, and the controls allow the pilot to navigate short distances in space to dogfight with enemy fighters.


Mark II Viper Cockpit

The Mark II Viper is an analog machine with a very simple Dradis, physical gauges, and paper flight plans.  It is a very old system.  The Dradis sits in the center console with the largest screen real-estate.  A smaller needle gauge under the Dradis shows fuel levels, and a standard joystick/foot pedal system provides control over the Viper’s flight systems.


Mark VII Viper Cockpit

The Viper Mk VII is a mostly digital cockpit with a similar Dradis console in the middle (but with a larger screen and more screen-based controls and information).  All other displays are digital screens.  A few physical buttons are scattered around the top and bottom of the interface.  Some controls are pushed down, but none are readable.  Groups of buttons are titled with text like “COMMS CIPHER” and “MASTER SYS A”.

Eight buttons around the Dradis console are labeled with complex icons instead of text.


When the Mk VII Vipers encounter Cylons for the first time, the Cylons use a back-door computer virus to completely shut down the Viper’s systems.  The screens fuzz out in the same manner as when Apollo gets caught in an EMP burst.

The Viper Mk VII is then completely uncontrollable, and the pilot’s joystick-based controls cease to function.

Overall, the Viper Mk II is set up similarly to a WWII P-51 Mustang or early-production F-15 Eagle, while the Viper Mk VII is similar to a modern-day F-16 Falcon or F-22 Raptor.


Usability Concerns

The Viper is a single-seat starfighter, and appears to excel in that role.  The pilots focus on their ship, and the Raptor pilots following them focus on the big picture.  But other aspects of the cockpit, including color choice, font choice, and control placement, are an issue.

Items appear a little small, and it takes a lot of training to know what to look for on the dashboards.  Also, the black lines radiating from the large group labels appear to go nowhere and provide no extra context or grouping.  Additionally, the controls (outside of the throttle and joystick) require quite a bit of reach from the seat.

Given that the pilots are accelerating at 9+ Gs, reaching a critical control in the middle of a fight could be difficult.  Hopefully, the designers of the Vipers made sure that ‘fighting’ controls are all within arm’s reach of the seat, and that the controls requiring more effort are for secondary tasks.

Similarly, all-caps text is the hardest to read at a glance, and should be avoided for interfaces like the Viper that require quick targeting and actions in the middle of combat.  The other text is very small, and it would be worth doing a deeper evaluation in the cockpit itself to determine if the font size is too small to read from the seat.

If anyone reading this blog has an accurate Viper cockpit prop, we’d be happy to review it! 

Fighter pilots in the Battlestar Galactica universe have quick reflexes, excellent vision, and stellar training.  They should be allowed to use all of those abilities for besting Cylons in a dogfight, instead of being forced to spend time deciphering their Viper’s interface.

Damage Control


After the Galactica takes a nuclear missile hit to its port launch bay, part of the CIC goes into Damage Control mode.  Chief Tyrol and another officer take up a position next to a large board with a top-down schematic of the Galactica.  The board has lights in each major section of the schematic representing the ship’s air-tight modules.


After the nuclear hit, the port launch bay is venting to space, bulkheads are collapsing in due to the damage, and there are uncontrolled fires.  In those blocks, the lights blink red.


Colonel Tigh orders the red sections sealed off and vented to space.  When Tigh turns his special damage control key in the “Master Vent” control, the lights disappear until the areas are sealed off again.  When the fires go out and the master vents are closed, the lights return to a green state.

On the board then, the lights have three states:

  • Green: air-tight, healthy
  • Blinking Red: Fire
  • Off: Intentional Venting

There do not appear to be any indications of the following states:

  • Damage Control Teams in the area
  • Open to space/not air-tight

We also do not see how sections are chosen to be vented.
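The board’s light logic amounts to a tiny state machine. As a thought experiment (the section names and the venting workflow below are my own sketch, not anything shown on screen), it might look like this:

```python
from enum import Enum

class SectionState(Enum):
    """The three states we actually see on the board."""
    HEALTHY = "green"        # air-tight, healthy
    FIRE = "blinking red"    # uncontrolled fire
    VENTING = "off"          # intentionally vented to space

class DamageBoard:
    def __init__(self, sections):
        # Every section starts green.
        self.sections = {name: SectionState.HEALTHY for name in sections}

    def report_fire(self, name):
        self.sections[name] = SectionState.FIRE

    def master_vent(self):
        # Tigh's key turn: every burning section vents, and its light goes dark.
        for name, state in self.sections.items():
            if state is SectionState.FIRE:
                self.sections[name] = SectionState.VENTING

    def reseal(self):
        # Fires out, master vents closed: lights return to green.
        for name, state in self.sections.items():
            if state is SectionState.VENTING:
                self.sections[name] = SectionState.HEALTHY

board = DamageBoard(["port launch bay", "starboard launch bay", "engine room"])
board.report_fire("port launch bay")
board.master_vent()
```

The missing states called out above (damage control teams present, open to space) would just be more enum members, which is exactly why their absence on the physical board is conspicuous.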

Why it works

The most effective pieces here are the red lights and the “vent” key.  Chief Tyrol has a phone to talk to local officers managing the direct crisis, and can keep a basic overview of the problems on the ship (with fire being the most dangerous) with the light board.  The “vent” key is likewise straightforward, and has a very clear “I’m about to do something dangerous” interaction.

What is confusing are the following items:

  • How does Chief Tyrol determine which phone/which officer he’s calling?
  • Who is the highest ranking officer in the area?
  • How does the crew determine which sections they’re going to vent?
  • How do they view more complex statuses besides “this section is on fire”?

As with other systems on the Galactica, the board could be improved with the use of more integrated systems like automatic sensors, display screens to cycle through local cameras, and tracking systems for damage control crew.  Also as with other systems on the Galactica, these were deliberate omissions to prevent the Cylons from being able to control the Galactica.

One benefit of the simplified system is that it keeps Chief Tyrol thinking of the high-level problem instead of trying to micromanage his local damage control teams.  With proper training, local teams with effective leadership and independent initiative are more effective than a large micro-managed organization.  Chief Tyrol can focus on the goals he needs his teams to accomplish:

  • Putting out fires
  • Evacuating local crew
  • Protecting the ship from secondary explosions

and allow his local teams to focus on the tactics of each major goal.

What it’s missing

A glaring omission here is the lack of further statuses.  In the middle of a crisis, Chief Tyrol could easily lose track of individual teams on his ship.  He knows the crews that are in the Port Hangar Bay, but we never hear about the other damage control teams and where they are.  Small reminders or other status indicators would keep the Chief from needing to remember everything that was happening across the ship.  Even a box of easily-grabbed sticky notes or a grease-pen board would help here and be very low-tech.

Possible indicators include:

  • Secondary lights in each section when a damage control crew was in the area
  • A third color indicator (less optimal, but would take up less space on the board)
  • A secondary board with local reports of damage crew location and progress
  • Radiation alarms
  • Extreme temperatures
  • Low oxygen states
  • High oxygen states (higher fire risk)
  • Structural damage

It is also possible that Colonel Tigh would have taken the local crews into consideration when making his decision if he could have seen where they were for himself on the board, instead of simply hearing Chief Tyrol’s protests about their existence.  Cutting intermediaries out of the feedback loop like this can make decision making faster and less error prone, but can admittedly introduce single points of failure.

Colonel Tigh and Chief Tyrol are able to get control of the situation with the tools at hand, but minor upgrades could have lessened the stress of the situation and allowed both of them to think more clearly before jumping to decisions.  Better systems would have given them all the information they needed, but the Galactica’s purpose limited them for the benefit of the entire ship.

FTL – Engine Analysis


The FTL Jump process on the Galactica has several safeguards, all appropriate for a ship of that size and an action of that danger (late in the series, we see that an inappropriate jump can cause major damage to nearby objects).  Only senior officers can start the process, multiple teams all sign off on the calculations, and dedicated computers are used for potentially damaging computations.

Even the actual ‘jump’ requires a two-stage process with an extremely secure key and button combination.  It is doubtful that Lt. Gaeta’s key could be used on any ship other than the Galactica.

The process is so effective, and the crew is so well trained at it, that even after two decades of never actually using the FTL system, the Galactica is able to make a pinpoint jump under extreme duress (the beginning of human extinction).

Difficult Confirmation


The one apparent failure in this system is the confirmation process after the FTL jump.  Lt. Gaeta has to run all the way across the CIC and personally check a small screen with less than obvious information.

Of the many problems with the nav’s confirmation screen, three stand out:

  • It is a 2D representation of 3D space, without any clear references to how the information has been compacted
  • There is no ‘local zero’ showing the system’s plane or the relative inclination of orbits
  • There are no labels on the data

Even the most basic orbital navigation system shows a bit more information about apogee, perigee, relative orbit, and gimbal readings.  Compare to this chart from Kerbal Space Program:


(from Kerbal Space Program)

The Galactica would need at least this much information to effectively confirm their location.  For Lt. Gaeta, this isn’t a problem because of his extensive training and knowledge of the Galactica.  

But the Galactica is a warship and would be expected to experience casualties during combat.  Other navigation officers and crew may not be as experienced or have the same training as Lt. Gaeta.  In a situation where he is incapacitated and it falls to a less experienced member of the crew, an effective visual display of location and vector is vital.

Simplicity isn’t always perfect

This is an example of where a bit more information in the right places can make an interface more legible and understandable.  Some information here looks useless, but may be necessary for the Galactica’s navigation crew.  With the extra information, this display could become useful for crew other than Lt. Gaeta.

Grabby hologram

After Pepper tosses off the sexy bon mot “Work hard!” and leaves Tony to his Avengers initiative homework, Tony stands before the wall-high translucent displays projected around his room.

Amongst the videos, diagrams, metadata, and charts of the Tesseract panel, one item catches his attention. It’s the 3D depiction of the object, the tesseract itself, one of the Infinity Stones from the MCU. It is a cube rendered in a white wireframe, glowing cyan amidst the flat objects otherwise filling the display. It has an intense, cold-blue glow at its center.  Small facing circles surround the eight corners, from which thin cyan rule lines extend a couple of decimeters and connect to small, facing, inscrutable floating-point numbers and glyphs.


Wanting to look closer at it, he reaches up and places fingers along the edge as if it were a material object, and swipes it away from the display. It rests in his hand as if it was a real thing. He studies it for a minute and flicks his thumb forward to quickly switch the orientation 90° around the Y axis.

Then he has an Important Thought and the camera cuts to Agent Coulson and Steve Rogers flying to the helicarrier.

So regular readers of this blog (or you know, fans of blockbuster sci-fi movies in general) may have a Spidey-sense that this feels somehow familiar as an interface. Where else do we see a character grabbing an object from a volumetric projection to study it? That’s right, that seminal insult-to-scientists-and-audiences alike, Prometheus. When David encounters the Alien Astrometrics VP, he grabs the wee earth from that display to nuzzle it for a little bit. Follow the link if you want that full backstory. Or you can just look and imagine it, because the interaction is largely the same: See display, grab glowing component of the VP and manipulate it.

Two anecdotes are not yet a pattern, but I’m glad to see this particular interaction again. I’m going to call it grabby holograms (capitulating a bit on adherence to the more academic term volumetric projection). We grow up having bodies and moving about in a 3D world, so the desire to grab and turn objects to understand them is quite natural. It does require that we stop thinking of displays as untouchable, uninterruptible movies and more like toy boxes, and it seems like more and more writers are catching on to this idea.

More graphics or more information?

Additionally, the fact that this object is the one 3D object in its display is a nice affordance that it can be grabbed. I’m not sure whether he can pull the frame containing the JOINT DARK ENERGY MISSION video to study it on the couch, but I’m fairly certain I knew that the tesseract was grabbable before Tony reached out.

On the other hand, I do wonder what Tony could have learned by looking at the VP cube so intently. There’s no information there. It’s just a pattern on the sides. The glow doesn’t change. The little glyph sticks attached to the edges are fuigets. He might be remembering something he once saw or read, but he didn’t need to flick it like he did for any new information. Maybe he has flicked a VP tesseract in the past?

Augmented “reality”

Rather, I would have liked to have seen those glyph sticks display some useful information, perhaps acting as leaders that connected the VP to related data in the main display. One corner’s line could lead to the Zero Point Extraction chart. Another to the lovely orange waveform display. This way Tony could hold the cube and glance at its related information. These are all augmented reality additions.

Augmented VP

Or, even better, he could do some things that are possible with VPs that aren’t possible with AR. He should be able to scale it to be quite large or small. Create arbitrary sections, or plan views. Maybe fan out depictions of all objects in the SHIELD database that are similarly glowy, stone-like, or that remind him of infinity. Maybe…there’s…a…connection…there! Or better yet, have a copy of JARVIS study the data to find correlations and likely connections to consider. We’ve seen these genuine VP interactions plenty of places (including Tony’s own workshop), so they’re part of the diegesis.

In any case, this simple setup works nicely, in which interaction with a cool medium helps underscore the gravity of the situation, the height of the stakes. Note to selves: The imperturbable Tony Stark is perturbed. Shit is going to get real.


The HoverChair Social Network


The other major benefit to the users of the chair (besides the ease of travel and lifestyle) is the total integration of the occupant’s virtual social life, personal life, fashion (or lack-thereof), and basic needs in one device. Passengers are seen talking with friends remotely, not-so-remotely, playing games, getting updated on news, and receiving basic status updates. The device also serves as a source of advertising (try blue! it’s the new red!).

A slight digression: What are the ads there for? Considering that the Axiom appears to be an all-inclusive permanent resort, the ads could be an attempt to steer passengers toward resources that the ship has in abundance. This would give heavily used activities and supplies a reprieve to be replenished for the next wave of guests, rather than being an upsell maneuver to draw more money from them. We see no evidence of exchange of money or other economic activity while on board the Axiom.

OK, back to the social network.


It isn’t obvious what the form of authentication is for the chairs. We know that the chairs have information about who the passenger prefers to talk to, what they like to eat, where they like to be aboard the ship, and what their hobbies are. With that much information, if there was no constant authentication, an unscrupulous passenger could easily hop in another person’s chair, “impersonate” them on their social network, and play havoc with their network. That’s not right.

It’s possible that the chair only works for the person using it, or only accesses the current passenger’s information from a central computer in the Axiom, but it’s never shown. What we do know is that the chair activates when a person is sitting on it and paying attention to the display, and that it deactivates as soon as that display is cut or the passenger leaves the chair.

We aren’t shown what happens when the passenger’s attention is drawn away from the screen, since they are constantly focused on it while the chair is functioning properly.

If it doesn’t already exist, the hologram should have an easy-to-push button or gesture that can dismiss the picture.  This would allow the passenger to quickly interact with the environment when needed, then switch back to the social network afterwards.

And, for added security in case it doesn’t already exist, biometrics would be easy for the Axiom. Tracking the chair user’s voice, near-field chip, fingerprint on the control arm, or retina scan would provide strong security for what is a very personal activity and device. This system should also have strong protection on the back end to prevent personal information from getting out through the Axiom itself.

Social networks hold a lot of very personal information, and the network should have protections against the wrong person manipulating that data. Strong authentication can prevent both identity theft and social humiliation.
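As a rough illustration of how cheap this kind of check could be, here is a minimal multi-factor sketch. Everything in it (the factor names, the two-of-three threshold) is an assumption for illustration; the film never shows how, or whether, the chair authenticates anyone:

```python
from dataclasses import dataclass

@dataclass
class Credentials:
    # Illustrative biometric factors the chair could plausibly sample.
    voiceprint: str
    nearfield_id: str
    fingerprint: str

def authenticate(occupant: Credentials, profile: Credentials,
                 factors_required: int = 2) -> bool:
    """Unlock the social profile only if enough independent factors match."""
    matches = [
        occupant.voiceprint == profile.voiceprint,
        occupant.nearfield_id == profile.nearfield_id,
        occupant.fingerprint == profile.fingerprint,
    ]
    return sum(matches) >= factors_required

john = Credentials("voice:john", "chip:0042", "print:john")
imposter = Credentials("voice:mary", "chip:9999", "print:john")
```

Requiring two of three factors means a stolen chair, or a single spoofed factor, isn’t enough to impersonate a passenger.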

Taking the occupant’s complete attention

While the total immersion of social network and advertising seems dystopian to us (and that’s without mentioning the creepy way the chair removes a passenger’s need for most physical activity), the chair looks genuinely pleasing to its users.

They enjoy it.

But like a drug, their enjoyment comes at the detriment of almost everything else in their lives. There seem to be plenty of outlets on the ship for active people to participate in their favorite activities: Tennis courts, golf tees, pools, and large expanses for running or biking are available but unused by the passengers of the Axiom.

Work with the human need

In an ideal world a citizen is happy, has a mixture of leisure activities, and produces something of benefit to the civilization. In the case of this social network, the design has ignored every aspect of a person’s life except moment-to-moment happiness.

This has parallels in goal-driven design, where distinct goals (BNL wants to keep people occupied on the ship, keep them focused on the network, and collect as much information as possible about what everyone is doing) direct the design of an interface.  When goal-driven means data-driven, the data being collected instantly becomes the determining factor of whether a design will succeed or fail.  The right data goals mean the right design; the wrong data goals mean the wrong design.

Instead of just occupying a person’s attention, this interface could have instead been used to draw people out and introduce them to new activities at intervals driven by user testing and data. The Axiom has the information and power, perhaps even the responsibility, to direct people to activities that they might find interesting. Even though the person wouldn’t be looking at the screen constantly, it would still be a continuous element of their day. The social network could have been their assistant instead of their jailer.

One of the characters even exclaims that she “didn’t even know they had a pool!”, indicating that she would have loved to try it, but the closed nature of the chair’s social network kept her from learning about it and enjoying it.  By directing people to ‘test’ new experiences aboard the Axiom and releasing them from its grip occasionally, the social network could have acted as an assistant instead of an attention sink.


Moment-to-moment happiness might have declined, but overall happiness would have gone way up.

The best way for designers to affect the outcome of these situations is to help shape the business goals and metrics of a project.  In a situation like this, after the project had launched, a designer could step in and point out those moments where a passenger was pleasantly surprised, or clearly in need of something to do, and help build a business case around serving those needs.

The obvious moments of happiness (that this system solves for so well) could then be augmented by serendipitous moments of pleasure and reward-driven workouts.

We must build products for more than just fleeting pleasure


As soon as the Axiom lands back on Earth, the entire passenger complement leaves the ship (and the social network) behind.

It was such a superficial pleasure that people abandoned it without hesitation when they realized that there was something more rewarding to do.  That’s a parallel that we can draw to many current products.  A product can keep attention for now, but something better will come along, and then its users will abandon it.


A company can produce a product or piece of software that fills a quick need and initially looks successful.  But that success falls apart as soon as people realize that they have larger and tougher problems that need solving.

Ideally, a team of designers at BNL would have watched after the initial launch and continued improving the social network.  By helping people continue to grow and learn new skills, the social network could have kept the people aboard the Axiom in top condition both mentally and physically.  By the time Wall-E came around, and life finally began to return to Earth, the passengers would have been ready to return and rebuild civilization on their own.

To the designers of a real Axiom Social Network: You have the chance to build a tool that can save the world.

We know you like blue! Now it looks great in Red!

The Hover Chair


The Hover Chair is a ubiquitous, utilitarian, all-purpose assisting device. Each passenger aboard the Axiom has one. It is a mix of a beach-side deck chair, fashion accessory, and central connective device for the passenger’s social life. It hovers about knee height above the deck, providing a low surface to climb into, and a stable platform for travel, which the chair does a lot of.

A Universal Wheelchair

We see that these chairs are used by everyone by the time Wall-E arrives on the Axiom.  From BNL’s advertising, though, this does not appear to have been the original intent.  One of the billboards on Earth advertising the Axiom-class ships shows an elderly family member using the chair, allowing them to interact with the rest of the family on the ship without issue.  In other scenes, the chairs are used by a small number of people relaxing around other more active passengers.

At some point between the initial advertising campaign and the current day, the chair went from a device for the elderly and physically challenged to one used 24/7 by all humans on board the Axiom.  This extends all the way down to the youngest children seen in the nursery, though they are given modified versions more suited to their age and disposition.  BNL shows here that their technology is excellent at providing comfort as an easy choice, but that it is extremely difficult to undo that choice and regain personal control.

But not a perfect interaction

We see failure from the passengers’ total reliance on the chairs when one of them (John) falls out of his chair trying to hand an empty drink cup to Wall-E. The chair shuts down, and John loses his entire connection to the ship. Because of his reliance on the chair, he’s not even able to pull himself back up and desperately reaches for the kiosk-bots for assistance.


This reveals the main flaw of the chair: Buy-N-Large’s model of distinct and complete specialization in robot roles has left the chair unable to help its passenger after the passenger leaves the chair’s seat. The first responders—the kiosk bots—can’t assist either (though this is due to programming, not capability…we see them use stasis/tractor beams in another part of the ship). Who or what robot the kiosk-bots are waiting for is never revealed, but we assume that there is some kind of specialized medical assistance robot specifically designed to help passengers who have fallen out of their chairs.

If these chairs were initially designed for infirm passengers, this would make sense; but the conscription of the chair technology by the rest of the passengers was unforeseen by its original designers.  Since BNL focused on specialization and fixed purpose, the ship was unable to change its programming to assist the less disabled members of the population without invoking the rest of the chair’s emergency workflow.

John reaching for help from the kiosk-bots makes it appear that he either has seen the kiosk-bot use its beams before (so he knows it has the capability to help, if not the desire), or he pays so little attention to the technology that he assumes any piece of the ship should be able to assist with anything he needs.

Whether he’s tech literate, or tech insensitive and just wants things to work like magic as they do on the rest of the ship, the system is failing him and his mental model of the Axiom.

Make it ergonomic in every situation


Considering that the chairs already hover, and we know Buy-N-Large can integrate active tractor beams in robot design, it would have been better to have a chair variant that allowed the passenger to be in a standing position inside the chair while it moved throughout the ship. It would then look like a chariot or a full-body exo-skeleton.

This would allow people who may not be able to stand (either due to disability or medical condition) to still participate in active sports like tennis or holo-golf. It would also allow more maneuverability in the chair, allowing it to easily rotate to pick up a fallen passenger and reposition them in a more comfortable spot, even if they needed medical attention.

It would also allow immobilization in the case of a serious accident, giving the medical-bot more time to arrive and preventing the passenger from getting injured in an attempt at self-rescue.

The chair has been designed to be as appealing to a low-activity user as possible. But when technology exists, and is shown to be relatively ubiquitous across different robot types, it should be integrated at the front line where people will need it. Waiting for a medical bot when the situation doesn’t demand a medical response is overly tedious and painful for the user. By using technology already seen in wide use, the chair could be improved to assist people in living an active lifestyle even in the face of physical disabilities.

Otto’s Manual Control



When the autopilot refused to give up authority, the Captain wrested control of the Axiom from the artificial intelligence, Otto.  Otto’s body is the helm wheel of the ship, and it fights back against the Captain.  Otto wants to fulfill BNL’s orders to keep the ship in space.  As they fight, the Captain dislodges a cover panel for Otto’s off-switch.  When the Captain sees the switch, he immediately realizes that he can regain control of the ship by deactivating Otto.  After the Captain fights his way to the switch and flips it, Otto deactivates and the helm reverts to a manual control interface for the ship.

The panel of buttons showing Otto’s current status next to the on/off switch deactivates half its lights when the Captain switches over to manual.  The dimmed icons indicate which systems are now offline.  Effortlessly, the Captain then returns the ship to its proper flight path with a quick turn of the controls.

One interesting note is the similarity between Otto’s stalk control keypad and the keypad on the Eve Pod.  Both have a circular button in the middle, with blue buttons in a semi-radial pattern around it.  Given the Eve Pod’s interface, this should also be a series of start-up buttons or option commands.  The main difference here is that they are all lit, whereas the Eve Pod’s buttons were dim until hit.  Since every other interface on the Axiom glows when in use, it looks like all of Otto’s commands and autopilot options are active when the Captain deactivates him.

A hint of practicality…

The panel is in a place that is accessible and would be easily located by service crew or trained operators. Given that the Axiom is a spaceship, the systems on board are probably heavily regulated and redundant. However, the panel isn’t easily visible thanks to specific decisions by BNL. This system makes sense for a company that doesn’t think people need or want to deal with this kind of thing on their own.

Once the panel is open, the operator has a clear view of which systems are on, and which are off. The major downside to this keypad (like the Eve Pod) is that the coding of the information is obscure. These cryptic buttons would only be understandable for a highly trained operator/programmer/setup technician for the system. Given the current state of the Axiom, unless the crew were to check the autopilot manual, it is likely that no one on board the ship knows what those buttons mean anymore.


Thankfully, the most important button is in clear English.  We know English is important to BNL because it is the language of the ship and the language being taught to the new children on board.  Anyone who had an issue with the autopilot system and could locate the button would know which button press would turn Otto off (as we then see the Captain immediately do).

Considering that Buy-N-Large’s mission is to create robots to fill humans’ every need, saving them from every tedious or unenjoyable job (garbage collecting, long-distance transportation, complex integrated systems, sports), it was both interesting and reassuring to see that there are manual over-rides on their mission-critical equipment.

…But hidden

The opposite situation could get a little tricky, though.  If the ship were in manual mode, with the door closed, and no qualified or trained personnel on the bridge, it would be incredibly difficult for anyone to figure out how to physically turn the ship back to auto-pilot.  A hidden emergency control is useless in an emergency.

Hopefully, considering the heavy use of voice recognition on the ship, there is a way for the ship to recognize an emergency situation and quickly take control. We know this is possible because we see the ship completely take over and run through a Code Green procedure to analyze whether Eve had actually returned a plant from Earth. In that instance, the ship only required a short, confused grunt from the Captain to initiate a very complex procedure.

Security isn’t an issue here because we already know that the Axiom screens visitors to the bridge (the Gatekeeper). By tracking who is entering the bridge using the Axiom’s current systems, the ship would know who is and isn’t allowed to activate certain commands. The Gatekeeper would either already have this information coded in, or be able to activate it when he allowed people into the bridge.

For very critical emergencies, a system that could recognize a spoken ‘off’ command from senior staff or trained technicians on the Axiom would be ideal.
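A sketch of what that could look like: the speech recognizer hands off a transcript plus the speaker’s identity (which the Gatekeeper already tracks), and a simple authorization check gates the critical command. The role names and command string here are all hypothetical:

```python
# Roles the Gatekeeper could mark as cleared for critical overrides.
AUTHORIZED_ROLES = {"captain", "senior officer", "technician"}

def handle_voice_command(speaker_role: str, transcript: str,
                         autopilot_engaged: bool):
    """Process one utterance; return (autopilot_engaged, response)."""
    if transcript.strip().lower() == "autopilot off":
        if speaker_role in AUTHORIZED_ROLES:
            # Critical override permitted: disengage the autopilot.
            return False, "Autopilot disengaged."
        return autopilot_engaged, "Not authorized for that command."
    return autopilot_engaged, "Command not recognized."
```

The point of the sketch is that the expensive part (knowing who is on the bridge) already exists in the diegesis; the authorization check itself is trivial.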

Anti-interaction as Standard Operating Procedure


The hidden door and the obscure hard-wired off button continue the mission of Buy-N-Large: to encourage citizens to give up control for comfort, and to make it difficult to undo that decision.  Seeing as how the citizens are more than happy to give up that control at first, it looks like a profitable assumption for Buy-N-Large, at least in the short term.  In the long term we can take comfort that the human spirit–aided by an adorable little robot–will prevail.

So for BNL’s goals, this interface is fairly well designed. But for the real world, you would want some sort of graceful degradation that would enable qualified people to easily take control in an emergency. Even the most highly trained technicians appreciate clearly labeled controls and overrides so that they can deal directly with the problem at hand rather than fighting with the interface.

The Lifeboat Controls


After Wall-E and Eve return to the Axiom, Auto steals the Earth plant and has his security bot place it on a lifeboat for removal from the ship. Wall-E follows the plant aboard the pod and is launched from the Axiom when the security bot remotely activates it. The pod has an autopilot function (labeled an auto-lock, and not obviously sentient) and a self-destruct function, both of which the security bot activates at launch. Wall-E first tries to turn the autopilot off by pushing the large red button on the control panel. This doesn’t work.


Wall-E then desperately tries to turn off the auto-destruct by randomly pushing buttons on the pod’s control panel. He quickly gives up as the destruct continues counting down and he makes no progress on turning it off. In desperation, Wall-E grabs a fire extinguisher and pulls the emergency exit handle on the main door of the pod to escape.

The Auto-Destruct

There are two phases of display on the controls for the Auto-Destruct system: off and countdown. In its off mode, the area of the display dedicated to the destruct countdown is plain and blue, with no label or number. The large physical button in the center is unlit and hidden, flush with the console. There is no indication of which sequence of keypresses activates the auto-destruct.

When it’s on, the area turns bright red, with a pulsing countdown in large numbers and a large ‘Auto-Destruct’ label on the left. The giant red pushbutton in the center is elevated above the console, surrounded by hazard striping, and lit from within.
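The two display modes read like a simple two-state machine. A sketch of the panel as described, with state names and display fields that are descriptive guesses from the scene rather than anything canonical:

```python
class AutoDestructDisplay:
    """Hypothetical model of the pod's two-state destruct display."""

    def __init__(self):
        self.armed = False  # off mode by default

    def arm(self):
        """Activated remotely by the security bot at launch."""
        self.armed = True

    def render(self) -> dict:
        # Off: plain blue area, no label, button flush and unlit.
        if not self.armed:
            return {"panel": "blue", "label": None,
                    "button": "flush, unlit"}
        # Countdown: red area, pulsing label, raised and lit button.
        return {"panel": "red", "label": "Auto-Destruct",
                "button": "raised, lit, hazard striping"}
```

Note what the model makes obvious: there is an `arm()` transition but no corresponding disarm, which is exactly the design gap the scene exposes.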


The odd part is that when the button in the center gets pushed down, nothing happens. This is the first thing Wall-E does to try to turn the system off, and the button has every affordance of being the control that stops the auto-destruct sequence on the panel in which it sits. It’s possible that this center button is really just a pop-up alert light, there to add immediacy to the audible and other visual cues of impending destruction.

If so, the pod’s controls are seriously inadequate.

Wall-E wants to shut the system off, and the button is the most obvious choice for that action. Self-destruction is an irreversible process (even more so than the typical ‘ejector seat’ controls that Alan Cooper likes to talk about). If accidentally activated, it is something that needs to be immediately shut off. It is also something that would cause panicked decision making in the escape pod’s users.

The blinking button in the center of the control area is the best and most obvious target for a panicked “SHUT IT OFF NOW!” response.

Of course this is just part of the fish-out-of-water humor of the scene, but is there a real reason it’s not responding like it obviously should? One possibility is that the pod is running an authority scan of all the occupants (much like the Gatekeeper for the bridge or what I suggested for Eve’s gun), and is deciding that Wall-E isn’t cleared to use that control. If so, that kind of biometric scanning should be disabled for a control like the Anti-Auto-Destruct. None of the other controls (up to and including the airlock door exit) are disabled in the same way, which causes serious cognitive dissonance for Wall-E.

The Axiom is able to defend itself from anyone interested in taking advantage of this system through the use of weapons like Eve’s gun and the Security robots’ force fields.

Anything that causes such a serious effect should have an undo or an off switch. The duration of the countdown gives Wall-E plenty of time to react, and the pod should accept that panicked response as a request to turn the destruct off, especially as a fail-safe in case its biometric scan isn’t functioning properly and lives are in the balance.
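That fail-safe argument can be sketched as authorization logic that fails open: when the biometric scan can’t give an answer, the press of the big red button is honored anyway. The function and its parameters are invented for illustration:

```python
from typing import Optional

def should_cancel_destruct(button_pressed: bool,
                           scan_result: Optional[bool]) -> bool:
    """Decide whether a press of the center button cancels the countdown.

    scan_result: True  = occupant cleared by the biometric scan
                 False = occupant explicitly denied
                 None  = scan unavailable or still running

    Fail-safe: because self-destruction is irreversible and lives may be
    at stake, an inconclusive scan is treated the same as clearance.
    """
    if not button_pressed:
        return False
    return scan_result is not False  # True or None both cancel
```

Under this logic Wall-E’s press would have worked unless the pod had positively decided to deny him, which, given that every other control responded to him, would be an inconsistent design anyway.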

The Other Controls

No Labels.



This escape pod is meant to be used in an emergency, and so the automatic systems should degrade as gracefully as possible.

The control panel is beautiful, extremely well grouped by apparent function, and incredibly responsive to touch inputs, but labels would have made it usable for even a moderately skilled crewmember in the pilot’s seat. Labels would also reinforce a crew member’s training in a panic-driven situation.

Buy-N-Large: Beautifully Designed Dystopia


A design should empower the people using it, and provide reinforcement to expert training in a situation where memory can be strained because of panic. The escape-pod has many benefits: clear seating positions, several emergency launch controls, and an effective auto-pilot. Adding extra backups to provide context for a panicked human pilot would add to the pod’s safety and help crew and passengers understand their options in an emergency.


Dust Storm Alert


While preparing for his night cycle, Wall-E is standing at the back of his transport/home, cleaning out his collection cooler on the transport’s drop door. In the middle of this ritual, an alert sounds from his external speakers. Concerned by the sound, Wall-E looks up to see a dust storm approaching. He hurries to finish cleaning his cooler and seals the door of the transport.

A Well Practiced Design

The Dust Storm Alert appears to override Wall-E’s main window into the world: his eyes. This is done to warn him of a very serious event that could damage him or permanently shut him down. What is interesting is that he doesn’t appear to register a visual response first. Instead, we first hear the audio alert, then Wall-E’s eye-view shows the visual alert afterward.

Given the order of the two parts of the alert, Wall-E’s designers considered the audible portion the most important piece of information. It comes first, is omnidirectional and loud enough for everyone nearby to hear, and is followed by more explicit information.


Equal Opportunity Alerts

By sounding the audible alert first, all Wall-E units, other robots, and people in the area would be alerted to a major event. Then the Wall-E units would receive the additional information, like range and direction, that they need to act. Notably, Wall-E’s vision does not actually tell him what the alert is for or what action he should take to be safe; either training or pre-programmed instructions must fill that gap. This is similar to tornado sirens, where each individual is expected to know where they are and what the safest nearby location is.
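This two-stage ordering — a broadcast tone for everyone first, then detail for the units that must act — amounts to a prioritized dispatch. A minimal sketch, with the stage names and detail payload invented for illustration:

```python
def dispatch_storm_alert(range_km: float, bearing_deg: float) -> list:
    """Emit alert stages in priority order.

    Stage 1: a loud, omnidirectional tone that everyone in the area
             (robots and humans alike) can hear.
    Stage 2: explicit detail (range and bearing to the storm) for the
             units that need it to act.
    """
    stages = []
    stages.append("AUDIO: storm warning tone")
    stages.append(f"DATA: storm at {range_km} km, bearing {bearing_deg} deg")
    return stages
```

The same second stage could be mirrored to the heads-up displays suggested below, so humans get guidance at the moment the units get their data.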

For humans working alongside Wall-E units, each person should have their own heads-up display, likely similar to a Google Glass device. When a Wall-E unit gets a dust storm alert, the human could then receive a sympathetic alert and guidance to the nearest safe area. Combined with regular training and storm drills, people in the wastelands of Earth would then know exactly what to do.

Why Not Network It?

Whether by luck or proper programming, the alert is triggered with just enough time for Wall-E to get back to his shelter before the worst of the storm hits. Given that the alert didn’t trigger until Wall-E could see the dust cloud for himself, this feels like very short notice. Too short. A good improvement to the system would be a connection to a weather satellite in orbit, or a weather broadcast in the city. This would warn him well before any of the storm hits, letting him take shelter and protect his solar collectors.
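The benefit of a networked feed over line-of-sight detection is simply lead time: warning time scales with detection distance. A toy comparison, with all distances and speeds invented for illustration:

```python
def warning_lead_time(detection_km: float, storm_speed_kmh: float) -> float:
    """Hours of warning once a storm is detected at detection_km away."""
    return detection_km / storm_speed_kmh

# Line of sight: Wall-E spots the dust wall only a few km out.
visual = warning_lead_time(5, 50)       # 0.1 h, i.e. six minutes to shelter
# Satellite feed: the same storm tracked from hundreds of km away.
satellite = warning_lead_time(300, 50)  # 6.0 h of warning
```

Even with generous assumptions about storm speed, the visual-only case leaves minutes rather than hours, which matches how narrowly Wall-E makes it indoors.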

Other than this, the alert system is effective. It warns Wall-E of the approaching storm in time to act, and it also warns everyone in the vicinity of the same issue. While the alert doesn’t inform everyone of what is happening, at least one actor (Wall-E) knows what it means and how to react. As with any storm warning system, a connection that can provide forecasts of potentially dangerous weather would be a huge plus.