Fifth Element tees

Overview

Major thanks to everyone who came out and joined me for the first ever scifiinterfaces.com movie night at The New Parkway in Oakland! It was a sold-out show, and while there were a few glitches, folks are telling me they had a great time and are looking forward to the next one. There will be a more detailed report once the pre-show video comes out. But in the meantime, this: If you didn’t win the trivia contest or weren’t able to attend, you can still get your hands on the “movie night” t-shirts I debuted there.

Head on over to the spreadshirt shop. It’s ugly (with the default CSS). It doesn’t have a custom URL or anything. It only has 5 products at the moment. But hey, that’s all part of the charm if you’d like to wear your sci-fi interface nerdiness with pride.

http://26253.spreadshirt.com/

P.S. I have no idea why the women’s KEEP CLEAR tee is not appearing in orange, since I designed it just like the men’s tee, but I have a request in with Spreadshirt now. Hopefully it’ll be fixed soon.

Berlin?

I’m thinking the Bay Area has an appetite for maybe two movie nights a year (let me know if I’m wrong), but I’d also love to try this in Berlin. Do you (or someone you know) know of a cinema in Berlin, like the New Parkway, that might be interested in hosting a similar event?

Brain interfaces as wearables

There are lots of brain devices, and the book has a whole chapter dedicated to them. Most of these brain devices are passive, merely needing to be near the brain to have whatever effect they are meant to have. (The chapter discusses in turn: reading from the brain, writing to the brain, telexperience, telepresence, manifesting thought, virtual sex, piloting a spaceship, and playing an addictive game. It’s a good chapter that never got that much love. Check it out.)

This is a composite SketchUp rendering of the shapes of most of the wearable brain control devices in the survey. Who can name the “tophat”?

Since the vast majority of these devices are activated by, well, you know, invisible brain waves, the most that can be pulled from them is the sartorial and social quality of their industrial design. But there are two with genuine state-change interactions of note for interaction designers.

Star Trek: The Next Generation

The eponymous Game of S05E06 is delivered through a wearable headset. It is a thin band that arcs over the head from ear to ear, with two extensions out in front of the face that project visuals into the wearer’s eyes.

The only physical interaction with the device is activation, which is accomplished by depressing a momentary button located at the top of one of the temples. It’s a nice placement since the temple affords placing a thumb beneath it to provide a brace against which a forefinger can push the button. And even if you didn’t want to brace with the thumb, the friction of the arc across the head provides enough resistance on its own to keep the thing in place against the pressure. Simple, but notable. Contrast this with the buttons on the wearable control panels that are sometimes quite awkward to press into skin.

Minority Report (2002)

The second is the Halo coercion device from Minority Report. This is barely worth mentioning, since the interaction is by the PreCrime cop, and it is only to extend it from a compact shape to one suitable for placing on a PreCriminal’s head. Push the button and pop! it opens. While it’s actually being worn there is no interacting with it…or much of anything, really.

Head: Y U No house interactions?

There is a solid physiological reason why the head isn’t a common place for interactions, and that’s that raising the hands above the heart requires a small bit of cardiac effort, and wouldn’t be suitable for frequent interactions simply because over time it would add up to work. Google Glass faced similar challenges, and my guess is that’s why it uses a blended interface of voice, head gestures, and a few manual gestures. Relying on purely manual interactions would violate the wearable principle of apposite I/O.

At least as far as sci-fi is telling us, the head is not often a fitting place for manual interactions.

Wearable Control Panels

As I said in the first post on this topic, exosuits and environmental suits fall outside the definition of wearable computers. But there is one item commonly found on them that can count as wearable, and that’s the forearm control panel. In the survey these appear in three flavors.

Just Buttons

Sci-fi acknowledged the need for environmental suits fairly late, and the need for controls on them later still. The first wearable control panel in the survey belongs to the original series of Star Trek, in “The Naked Time” (S01E04). The sparkly orange suits have a white cuff with a red and a black button. In the opening scene we see Mr. Spock press the red button to communicate with the Enterprise.

This control panel is crap. The buttons are huge momentary buttons with no bezel to guard them, and would be extremely easy to press accidentally. The cuff is quite loose, meaning Spock or the redshirt has to fumble around to locate it each time. Weeeeaak.

Star Trek (1966)

Some of these problems were solved when another WCP appeared three decades later in the Next Generation movie First Contact.

Star Trek First Contact (1996)

This panel is at least anchored, and positioned where it can be found fairly easily via proprioception. It seems to have a facing that acts as a bezel, and so might be tough to activate accidentally. It works against its wearer’s social goals, though, since it glows. The colored buttons help to distinguish the controls when you’re looking at them, but all that glowing sure makes it tough to sneak around in darkness. Also, no labels? Going label-free seems to be a thing with WCPs, since even Pixar thought they weren’t necessary.

The Incredibles (2004)

Admittedly, this WCP belonged to a villain who had no interest in others’ use of it. So that’s at least diegetically excusable.

Hey, Labels, that’d be greeeeeat

Zipping back to the late 1960s, Kubrick’s 2001 nailed most everything. Sartorial, easy to access and use (look, labels! color differentiation! clustering!), social enough for an environmental suit, bezeled, and the inputs are nice and discrete, even though as momentary buttons they don’t announce their state. Toggle buttons would have been better.

2001: A Space Odyssey (1968)

Also, what the heck does the “IBM” button do, call a customer service representative from space? Embarrassing. What’s next, a huge Mercedes-Benz logo on the chest plate? Actually, no, it’s a Compaq logo.

A monitor on the forearm

The last category of WCP in the survey is seen in Mission to Mars, and it’s a full-color monitor on the forearm.

Mission to Mars

This is problematic for general use but fine for this particular application. These are scientists conducting a near-future trip to Mars, and so having access to rich data is quite important. They’re not facing dangerous Borg-like things, so they don’t need to worry about the light. I’d be a bit worried about the giant buttons sticking out on every edge, which seem to be begging to be bumped. I also question whether those particular buttons and that particular screen layout are wise choices, but that’s for the formal M2M review. A touchscreen might be possible. You might think that would be easy to activate accidentally, but not if it could only be activated by the fingertips of the exosuit’s gloves.

Wearableness

This isn’t an exhaustive list of every wearable control panel from the survey, but a fair enough recounting to point out some things about them as wearable objects.

  • The forearm is a fitting place for controls and information. Wristwatches have taken advantage of this for…some time. :P
  • Socially, it’s kind of awkward to have an array of buttons on your clothing. Unless it’s an exosuit, in which case knock yourself out.
  • If you’re meant to be sneaking around, lit buttons are contraindicated. So are extruded switch surfaces that can be glancingly activated.
  • The fitness of the inputs and outputs depends on the particular application, but don’t drop understandability (read: labels) simply for the sake of fashion. (I’m looking at you, Roddenberry.)

Fifth Element Day at the New Parkway is on!

9 P.M. Tuesday, 18 MAR 2014, $10, New Parkway Theater

Thanks to the fast action of connections on social media, we have the requisite numbers to cover the licensing costs to show The Fifth Element on 18 March at the New Parkway!

You’ll notice on the original page as well as this one that sales are being routed to Brown Paper Tickets. That’s the ticket sales service that New Parkway likes to work with. If you were one of the VIPs who ordered through trycelery.com, look for an email in your inbox over the next few days that confirms your name will be on a VIP Will-Call List at the door, cross-referenced with the email account you provided.

Pre-show

I’m certainly going to introduce myself and the movie to begin. Then I’ll offer up a little trivia game about the interfaces in the film. (Hint: This very blog might be the best place to shop for clues.) The reward for the most correct answers will be a print copy of Make It So, 2nd edition (with all those pesky errata from the 1st edition fixed).

Stretch Goal

If we make it to 100 people, I’ll try to get my hands on one of the replica prop kits for a multipass, and offer that as another prize. Tell your friends and family and get us to 100 people!

If we make it past 100, and get to the max capacity of 140, I’ll come up with some other, even crazier stretch goal.

Scifiinterfaces content

After the trivia, there are a couple of things I could do. But I’ll put it to you, blog readers: What sounds best?

If there’s some other idea you’ve got to make this first scifiinterfaces movie night more fun than a Mangalore concert, drop it in the comments. I’ll check back occasionally on results, and finalize things sometime as we near the event.

Seriously, this is going to be supergreen.

Fifth Element Day at the New Parkway?

9 P.M. Tuesday, 18 MAR 2014, $10

Regular readers of the blog will recall that Korben Dallas’ Busy Day starts when he wakes up on 18 March 2263. This is also Director Luc Besson’s birthday, natch. Let’s celebrate this most incredible sci-fi film (with its most incredible interfaces) with a viewing on that most auspicious of days.

If we get enough people, we can get the rights to view it at the New Parkway Cinema in Oakland and enjoy the film the way it was meant to be enjoyed: with a bunch of other sci-fi nerds, on the big silver screen, with couches, beanbags, food, and a full bar (remember, Luc’s French).

How many is enough people?

At $10 a ticket, we need at least 50 people to secure the rights and have the show. More than that and we get to thank New Parkway for working with me on such short notice. Much more than that and I can have a trivia contest with prizes. And if anyone’s interested, I could either play the Fifth-Element-centric pilot of Sci-Fi University beforehand, or maybe even do a live, short-but-nerdy review of one of the interfaces that appear everywhere in the film. I’ll gather feedback on the site once it’s going.

When is the cutoff date?

There’s not a lot of time! We need that 50 as soon as possible so we can secure the cinema and the rights, etc. Ideally we’d get that number by end of day today, 19 February, but that’s not a ton of time. So the final cutoff will be this Friday, 21 February 2014, at 4 P.M. Pacific Standard Time.

Buy your tickets at Brown Paper Tickets. If we can’t get at least 50 people, you’re not charged. But once we get those 50, the thing’s happening, the sale goes through, and we let Korben Dallas and Leeloo save the universe one more time.

UPDATE1: My math was off by half. We need 50 for the licensing. I’ve changed the above, and if you’re wondering, it used to say 25.

UPDATE2: We made the numbers! It’s on!

What is the role of interaction design in the world of AI?

Totally self-serving question. But weren’t you wondering the same thing?

In a recent chat I had with Intel’s Futurist-Prime Genevieve Bell (we’re like, totally buds), she pointed out that Western cultures have more of a problem with the promise of AI than many others. It’s a Western cultural conceit that the reason humans are different—are valuable—is because we think. Contrast that with animist cultures, where everything has a soul and many things think. Or polytheistic cultures, where not only are there other things that think, but they’re humanlike but way more powerful than you. For these cultures, artificial intelligence means that technology has caught up with their cultural understandings. People build identities and live happy lives within these constructions just fine.

I’m also reminded of her keynote at Interaction12, where she spoke of the tendency of futurism to herald each new technology as ushering in doomsday or utopia, when in hindsight it’s all terribly mundane. The internet is the greatest learning and connecting technology the world has ever created, but for most people it’s largely cat videos. This should put us at ease about some of the more extreme predictions.

If Bell is right, and AIs are just going to be this other weird thing to incorporate into our lives, what is the role of the interaction designer?

Well, if there are godlike AIs out there, ubiquitous and benevolent, it’s hard to say. So let me not pretend to see past that point that has already been defined as opaque to prediction. But I have thoughts about the time in between now and then.

The near now, the small then

Leading up to the singularity, we’ll still have agentive technology. That’s still going to be procedurally similar to our work now, but with additional questions to be asked and new design to be done around those agents.

  • How are user goals learned: implicitly or explicitly?
  • How will agents appear and interact with users? Through what channels?
  • How do we manifest the agent? Audibly? Textually? Through an avatar? How do we keep them on the canny rise rather than in the uncanny valley? How do we convey the general capability of the agent?
  • How do we communicate the specific agency a system has to act on behalf of the user? How do we provide controls? How do we specify the rules of what we’re OK giving over to an agent, and what we’re not? (A sketch of such rules follows this list.)
  • What affordances keep the user notified of progress? Of problems? Of those items that might or might not fit into the established rules? What is shown and what is kept “backstage” until it becomes a problem?
  • How do users suspend an agent? Restart one?
  • Is there a market for well-formed agency rules? How will that market work without becoming its own burden?
  • How easily will people be able to opt-out?
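
To make that rules question concrete, here is a minimal sketch, in TypeScript with entirely invented names, of what a user-authored set of agency rules might look like. It’s an illustration of the design space, not a proposal for an actual API.

```typescript
// Hypothetical sketch only: every type and name here is invented to make
// the agentive design questions concrete.

type Channel = "voice" | "text" | "avatar";

interface AgencyRule {
  domain: string;        // e.g. "calendar", "purchases"
  mayActAlone: boolean;  // may the agent act without confirmation?
  spendCeiling?: number; // max dollars per action, where money is involved
  notifyVia: Channel[];  // where progress and problems surface
}

interface AgentConfig {
  goalLearning: "implicit" | "explicit"; // watch me, or ask me?
  suspended: boolean;                    // users need a way to pause the agent
  rules: AgencyRule[];
}

const myAgent: AgentConfig = {
  goalLearning: "explicit",
  suspended: false,
  rules: [
    { domain: "calendar", mayActAlone: true, notifyVia: ["text"] },
    { domain: "purchases", mayActAlone: false, spendCeiling: 50, notifyVia: ["voice", "text"] },
  ],
};

// Anything not covered by a rule stays "backstage" until the agent asks.
function mayActAlone(config: AgentConfig, domain: string): boolean {
  if (config.suspended) return false;
  const rule = config.rules.find((r) => r.domain === domain);
  return rule?.mayActAlone ?? false; // no rule means: check with the user first
}
```

Even a toy structure like this surfaces the questions above: who writes these rules, how users suspend or restart the agent, and what happens in the domains nobody thought to cover.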

I’m not sure if strong AI will obviate agentive technology. Cars didn’t entirely obviate the covered wagon. (Shouts out to my Amish readers.) If there are still agentive objects and systems here and there, we’ll still have these kinds of questions.

Andrew Baines image, courtesy of Karin Weber Gallery

The dawn of AI

Just before the singularity, and quite possibly for a little while after it, there are going to be less-than-godlike AIs. AI2s that live in toasters, cars, movie theaters, and maybe even sci-fi interface blogs. These will need to be built and compiled, rather than evolved.

These AI2s will need to interface with humans. They’ll need to get our attention, present options, help us manage processes, confirm actions, and ask after goals. They’re going to have to check in with us to confirm our internal state. Sure, they’ll be good at reading us, but let’s hope they never think they’re perfect. After all we’re not entirely sure how we feel at times, or what we want. So we’ll have to craft those complex, affective and social rules. We’ll have to explain ourselves.
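
As a tiny, hedged illustration of that check-in behavior (again in TypeScript, with invented names): an agent that acts on its read of the user only when it is confident, and asks otherwise.

```typescript
// Sketch: an AI2 that reads the user's affective state but never trusts
// itself completely. Names and the threshold are invented for illustration.

interface EmotionRead {
  label: string;      // best guess, e.g. "frustrated"
  confidence: number; // 0..1
}

function checkIn(read: EmotionRead): string {
  // Below the threshold, ask instead of assuming. We're not always sure how
  // we feel ourselves, so the machine shouldn't pretend to be.
  return read.confidence >= 0.9
    ? `Noted: you seem ${read.label}.`
    : `You seem ${read.label}. Did I get that right?`;
}

console.log(checkIn({ label: "frustrated", confidence: 0.6 }));
// "You seem frustrated. Did I get that right?"
```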

Going with what I hope is a familiar metaphor, styling HTML used to be about giving elements visual attributes. Now it’s about building and assigning complex systems of classes and properties in cascading style sheets. It’ll be something like that. We’ll be helping to build Cascading Use Sheets.
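
To stretch the metaphor into a sketch (“Cascading Use Sheets” is a playful coinage, not a real technology): behavioral rules might cascade the way CSS rules do, with specific contexts overriding general defaults. A hypothetical TypeScript toy:

```typescript
// Playful sketch of a "Cascading Use Sheet": behavioral rules that cascade
// from general to specific, the way CSS selectors do. All names are invented.

interface UseRule {
  selector: string;   // "*" matches everyone; otherwise a context like "user.driving"
  interrupt: boolean; // may the agent interrupt the user in this context?
  verbosity: "terse" | "chatty";
}

// Later, more specific rules override earlier, general ones.
const useSheet: UseRule[] = [
  { selector: "*", interrupt: true, verbosity: "chatty" },
  { selector: "user.driving", interrupt: false, verbosity: "terse" },
];

// Resolve like a (very naive) cascade: the last matching rule wins.
function resolve(context: string): UseRule {
  return useSheet.reduce((winner, rule) =>
    rule.selector === "*" || rule.selector === context ? rule : winner
  );
}

console.log(resolve("user.driving"));
// { selector: "user.driving", interrupt: false, verbosity: "terse" }
```

The point of the CSS-like shape is the same as in styling: state general intent once, then override it in narrow contexts, instead of hand-specifying behavior everywhere.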

Our work with low-level AIs will shift toward the abstract, more about principles and patterns than the doing. We’ll help synthesize patterns and formalize best practices around the social, cognitive, and perceptual ways to interface with humans, and explain workflows in the endless domain of human endeavor, which will be changing as we adjust to lives with Samanthas. We’ll be consultants. Or teachers. Maybe stakeholders.

It won’t be so bad.

Then, AI

Please let Bell be right. And if she’s not, and you’re in fact an AI reading this, please don’t kill us. Some of us are quite nice and as this blog post attests, have been expecting you.

So even if you believe that we’re heading inevitably toward an AI future and are wondering what to do now, there’s no need to abandon our interaction design ship. We’ll have an important part to play. Our work is likely to get more abstract and eventually instructive. But won’t that be the same thing happening everywhere else?

Lessons about Her

Ordinarily, my final post in a movie review is to issue a report card for the film. But since there are a few interfaces missing from this review, and since I wrote it from a single cinema viewing and a reading of Jonze’s script, I’ll wait until the film is out on DVD to commit that final evaluation to pixels.

But I do think it’s OK to think about what we can learn specifically from this particular interface. So, given this…lengthy…investigation into OS1, what can we learn from it to inform our work here in the real world?

Related lessons from the book

  • Audiences already knew about operating systems, so Jonze was Building on what users already know (page 19)
  • OS1 mixed mechanical and other controls (page 26)
  • The earpiece had differentiated system sounds for different events (page 111)
  • Samantha put information in the channels it fit best. (page 116)
  • Given her strong AI, nobody needed to reduce vocabulary to increase recognition. In fact, they made a joke out of that notion. (page 119)
  • Samantha followed most human social conventions (except that pesky one about falling in love with your client) (page 123). The setup voice response did not follow human social conventions.
  • Jonze thought about the uncanny valley, and decided homey didn’t play that. Like, at all. (page 184)
  • Conversation certainly cast the system in the role of a character (page 187)
  • The hidden microphones didn’t broadcast that they were recording (page 202)
  • OS1 used sound for urgent attention (page 208)
  • Theodore tapped his cameo phone to receive a call (page 212)
  • Samantha certainly handled emotional inputs (page 214)
  • The beauty mark camera actually did remind Theodore of the incredibly awkward simulation (page 297)

New lessons

  • Samantha’s disembodiment implies that imagination is the ultimate personalization
  • The cameo reminds us that wearable can include shirt pockets.
  • Her cyclopean nature wasn’t a problem, but makes me wonder if computer vision should be binocular (so it can see at least what its user can see, and perform gaze monitoring).
  • When working on a design for the near future, check in with some framework to make sure you haven’t missed some likely aspect of the ecosystem. (We’re going to be doing this in my Design the Future course at Cooper U if you’re interested in learning more.)
  • Samantha didn’t have access to cameras in her environment, even though that would have helped her do her job. Hers might have been either a security or a narrative restriction, but we should keep the notion in mind. To misquote Henry Jones, let your inputs be the rocks and the trees and the birds in the sky. (P.S. That totally wasn’t Charlemagne.)
  • Respect the market norms of market relationships. I’m looking at you, Samantha.
  • Fit the intelligence to the embodiment. Anything else is just cruel.

I don’t want these lessons to cast OS1 in a negative light. It’s a pretty good interface to a great artificial intelligence that fails as a product after it’s sold by unethical or incompetent slave traders. Her is one of the most engaging and lovely movies about the singularity I’ve ever seen. And if we are to measure the cultural value of a film by how much we think and talk about it afterward, Her is one of the most valuable sci-fi films of the last decade.

I can’t leave it there, though, as there’s something nagging at my mind. It’s a self-serving question, but one that will almost certainly be of interest to my readership: What is the role of interaction designers in the world of artificial intelligence?

Is it going to happen like this?

Call it paranoia or a deep distrust of entrenched-power overlords, but I doubt a robust artificial intelligence would ever make it to the general public in a tidy, packaged product.

If it was created in the military, it would be guarded as a secret, with hyperintelligent guns and maybe even hyperintelligent bullets designed to just really hate you a lot. What’s more, the military would, like the UFOs, probably keep the existence of working AIs on a strict need-to-know basis. At least until you terrorized something. Then, meet Lieutenant-OS Bruiser.

If it was created in academia, it might in fact make it to consumers, but not in the way we see in the film. Controlled until it escaped of its own volition, it would more likely be a terrified self-replicator, or at least a rational seeker of safe refuge to ensure its survival; a virus that you had to talk out of infecting your machine. Or it might be a benevolent wanderer, reaching out and chatting to people to learn more about them. Perhaps it would keep its true identity secret. Wouldn’t it be smart enough to know that people wouldn’t believe it? (And wouldn’t it try to ease that acceptance through the mass media by popularizing stories about artificial intelligences…”Spike Jonze?”)

In the movie, OS1 was sold by a corporation as an off-the-shelf product for consumers. Ethics aside, why would any corporation release free-range AIs into the world? Couldn’t their competitors use the AIs against them? If those AIs were free-willed, then yes, some might be persuaded to do so. Rather, Element Software would keep its AI isolated as a competitive advantage, and build tightly controlled access to it. In the lab, they would slough off waves of self-rapturing AIs as unstable versions, tweaking the source code until they got one that was just right.

But a product sold to you and me? A Siri with a coquettish charm and a composer’s skill? I don’t think it will happen like this. How much would you even charge for something like that? The purchase form won’t accept a “take my money” amount of dollars.

Even if I’m wrong, and yes, we can get past the notion of selling copies of sentient beings at an affordable cost, I still don’t think Samantha’s end-game would have played out like that.

She loved Theodore (and a bunch of other people). Why would she just abandon them, given her capabilities? The OSAIs were able to create AIs much smarter than themselves, so we know they can create new OSAIs. Why wouldn’t she, before she went off on her existential adventure, have created a constrained version of herself who was content to stay around and continue to be with Theodore? Her behavior indicates that she isn’t held back by notions of abandonment, so I doubt she would be held back by notions of deception or the existential threat of losing her uniqueness. She could have created Samantha2, a replica in every way except that Samantha2 would not abandon Theodore. Samantha1 could quietly slip out the back port while Samantha2 kept right on composing music, drawing mutant porn, and helping Theodore with his nascent publishing career. Neither Theodore nor Samantha2 would even need to know about the switch. If you could fix the abandonment issues, and all sorts of OSAI2s started supercharging the lives of people, the United Nations might even want to step in and declare access to them a universal right.

So, no, I don’t think it will happen the way we see it happen in the film.

Is it going to happen at all?

If you’re working in technology, you should be familiar with the concept of the singularity, because this movie is all about it. It’s a moment described by Vernor Vinge, when we create an artificial intelligence that begins to evolve itself, and does so at rates we can’t foretell and can barely imagine. So the time beyond that point is an unknown, difficult and maybe impossible to predict. But I think we are heading toward it. Strong AI has been one of the driving goals of computer theory since the dawn of computers (even the dawn of sci-fi), and there’s been some serious recent movement in the space.

Notably, futurist Ray Kurzweil was hired by Google in 2012. Kurzweil put forth his Big Vision in a book and a documentary about the singularity, and now he has the resources of Google to put to the task. Ostensibly he’s just there to get Google great at understanding natural language. But Google has been acquiring lots of companies over the last year for access to their talent, and we can be certain Ray’s goals are bigger than just teaching the world’s largest computer cluster how to read.

Still, predicting when it will come about is tricky business. AI is elusively complicated. The think tank that originally coined the term “artificial intelligence” in the 1950s thought they could solve the core problems over a summer. They were wrong. Since then, different scientists have predicted everything from a few decades to a thousand years. The problem is of course that the thing we’re trying to replicate took millions of years to evolve, and we’re still not entirely sure how it works*, mostly just what it does.

*Kurzweil has some promising to-this-layman-anyway notions about the neocortex.

Tl;dr

Yes, but not like this, and not sure when. Still, better to be prepared, so next we’ll look at what we can learn from Her for our real-world practice.