
Interview: Author and Hieroglyph Community Member John C. Havens

September 23, 2015 in Hieroglyph

John C. Havens is a Hieroglyph community member, a contributor to Mashable and The Guardian, and the author of the book Hacking Happiness. I had a chance to read John’s new book, Heartificial Intelligence (to be published in February 2016 by Tarcher/Penguin) and chat with him about his work studying the intersection of emerging technology and personal well-being.

Note: This interview has been edited for length and clarity.

Bob Beard, Project Hieroglyph: How did you find Project Hieroglyph, and what are your expectations for the project?

John C. Havens: I’m a huge science fiction fan – which, if you have any self-respect, means that you’re a huge Neal Stephenson fan. Snow Crash is a seminal piece of any nerd’s Bible. When I encountered Hieroglyph I thought, what a fantastic idea to harness the power of storytelling to imagine utopian future visions and then create a pragmatic roadmap to which people can contribute. So instead of just wistfully talking in an esoteric, albeit well-meaning way about how the future could look, why not build stories with the people who are equipped to help make that future a reality? I found that extremely exciting.

BB: What are your expectations for this community, and what would you like to see grow out of it?

JCH: What I’m enjoying is thinking through how stories lead to pragmatic change. So I hope that it continues to be not just amazing stories by Neal Stephenson and other writers of his caliber, but also an exploration of how we can create real pathways to positive futures in the minds of readers.

BB: I think you’re doing that yourself in Heartificial Intelligence; I appreciated one of your descriptors for the book, saying it’s not so much about science fiction as it is about science inflection – essentially using storytelling as a collaborative act in designing the future.

JCH: Thank you very much. I hate calling myself a futurist, although I appreciate the term. I call myself a geeky presentist because of what I know about technologies that already exist but just aren’t ubiquitous yet. For example, you have Google working on self-driving cars, robots entering our homes, and advances in AI – three separate things – but if you think on the level of mass production and of how deeply technology is embedded in culture, those three things are naturally going to come together at some point. Telling stories about possible futures is a way of making connections and saying, “hey, do you see these patterns that I see?”

BB: You frame the book in two different ways. There are certainly some positive vignettes about living with technology, but you have also written some very dark futures that could come to pass if we don’t make conscious, thoughtful choices today. Do you see the future as inescapably apocalyptic if we don’t make these changes? Is that the default?

JCH: I’m not anti-tech, but what I am emphatic about telling people is that it is ignorant – and I don’t mean stupid – to ignore the research showing that when we interact with a device, we lose human qualities that cannot be regained. So if we choose to only spend time with robots, then our human empathy will erode. Our relationship with technology is changing how we interact with other humans; as a result, some of our human qualities are atrophying.

And we cannot ignore how our personal data is analyzed, shared, tracked, mined, and used. A standard terms and conditions agreement for companion robots like Buddy and Pepper is likely not enough to inform buyers about the hardware used to analyze and affect their emotions. In a very real sense, the manufacturers can control how we respond to the robots, effectively manipulating our emotions based on their biases. That’s a pivotal part of the conversation. It’s not just about privacy; it’s about controlling your identity. And it’s not just about money and people manipulating you to buy something. It’s about regaining a space where I don’t have fifty different algorithms telling me what I want.

BB: So where is that space?

JCH: Well there’s a technical answer and an aspirational answer.

Technically, a person could have what’s known as a personal cloud. The idea has been around for years; it’s part of a concept called privacy by design, and it simply means that instead of your data being fragmented across two or three hundred different places, you have one locus of identity where you can define yourself. Technologically, a personal cloud structure is pretty doable. There are companies like Personal.com and others in Europe where you’re able to take all your data and set up a dashboard of permissions, specifying who can access it and when. The main reason it’s so hard in practice is that it’s not in Facebook’s or Google’s or IBM’s or LinkedIn’s interest to have you do that, because right now your personal data is a commodity they use to generate revenue.
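[Editor’s note – for readers who want to picture that “dashboard of permissions” concretely, here is a minimal sketch in Python. It is not Personal.com’s API or any real product; the class names, fields, and recipients are hypothetical. It only illustrates the privacy-by-design idea John describes: the data lives in one place, and access is granted explicitly, per recipient, and expires.]

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Permission:
    """One grant: who may read which data, and until when."""
    recipient: str       # e.g. "meal-planner-app" (hypothetical)
    fields: set          # which parts of the personal cloud are visible
    expires: datetime    # access lapses automatically after this time


@dataclass
class PersonalCloud:
    """A single locus of identity: the data stays with its owner,
    and access is granted explicitly, per recipient and per field."""
    owner: str
    data: dict = field(default_factory=dict)
    grants: list = field(default_factory=list)

    def grant(self, recipient, fields, days):
        """Add a time-limited permission to the dashboard."""
        self.grants.append(
            Permission(recipient, set(fields), datetime.now() + timedelta(days=days)))

    def read(self, recipient, field_name):
        """Return a value only if an unexpired grant covers it."""
        for p in self.grants:
            if (p.recipient == recipient and field_name in p.fields
                    and p.expires > datetime.now()):
                return self.data.get(field_name)
        return None  # no grant, no data


cloud = PersonalCloud(owner="alice", data={"dietary_preference": "vegetarian"})
cloud.grant("meal-planner-app", {"dietary_preference"}, days=30)
print(cloud.read("meal-planner-app", "dietary_preference"))  # -> vegetarian
print(cloud.read("ad-network", "dietary_preference"))        # -> None
```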

Aspirationally, a lot of Heartificial Intelligence is about what I think is a real positive force right now in the AI world: the field of ethics. I didn’t study it in college, so at first it seemed very general and vague to me – I pictured Socrates wearing a robe or Monty Python sketches about philosophers playing soccer. But what I’ve realized is that applied ethics means asking tough questions about people’s values and about their individual choices. A lot of these personalization algorithms are trying to discover what individuals say will make their lives better, so in one sense it all hinges on values. AI manufacturers currently rely on sensors to observe human behavior – that’s the tracking methodology, online and off – and that’s fine as far as it goes. There’s a massive wealth of information being generated about our lives, but it doesn’t involve the individual subjectively saying what they feel or think. It only captures the external, objective side of things.

The computer scientist Stuart Russell uses a methodology called inverse reinforcement learning. What he does that most AI manufacturers don’t: when a device or its sensors observe a human doing something for a while, the pattern recognition comes back and says, “here’s the pattern,” and then that pattern is examined further to ask, “what human value does it reflect?” I talk about this in the book [Editor’s note – check out an excerpt here]: if a kitchen robot were being created for Cuisinart and it could cook 10,000 recipes, that would be great. But if a recipe calls for chicken and the robot can’t find any, you have to make sure it’s programmed not to improvise a substitution and cook the family’s pet cat. That’s the kind of substitution that doesn’t align with human values, but the robot needs to be taught that explicitly.
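[Editor’s note – to make that last step concrete, here is a toy Python sketch with invented names. It is not Russell’s inverse reinforcement learning algorithm; it only illustrates the explicit, human-taught constraint John describes, in which a behavior-derived substitution is vetoed because it conflicts with a stated value.]

```python
# Hypothetical sketch: vetoing a learned substitution that conflicts with a
# human value. The value set and recipe data are invented for illustration.

PROTECTED = {"pet cat", "pet dog"}                           # never treated as food
SUBSTITUTES = {"chicken": ["tofu", "chickpeas", "pet cat"]}  # naive, behavior-derived list


def choose_substitute(missing_ingredient, pantry):
    """Pick a substitute that is both available and value-compliant."""
    for candidate in SUBSTITUTES.get(missing_ingredient, []):
        if candidate in PROTECTED:
            continue        # the explicit, human-taught constraint
        if candidate in pantry:
            return candidate
    return None             # better to stop and ask than to violate a value


print(choose_substitute("chicken", {"tofu", "pet cat"}))  # -> tofu
print(choose_substitute("chicken", {"pet cat"}))          # -> None (refuse rather than cook the cat)
```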

So the practice of applied ethics requires that you take a step back and say, “As we create this product using this algorithm, we cannot ignore the values of the end user, because those values will define the efficacy and success of what we’re creating.” An increased focus on applied ethics will also help engineers and manufacturers, who are often demonized even though they’re not trained to be ethicists.

BB: You write in the book that our “future happiness is dependent on teaching our machines what we value the most.”

JCH: The central question of the book is, “How will machines know what we value if we don’t know ourselves?” Near the end of the book there is a values-tracking assessment that I created with a friend who has a PhD in positive psychology. We examined psychological studies conducted over the years and found twelve values that are common around the world, across multiple cultures and to both men and women. These are concepts like family, art, education, and so on. It’s not that you and I will see those things the same way – and that’s exactly the point.

What I’m encouraging people to do is identify and codify the values that animate their lives, because positive psychology research shows that if you don’t live according to your values every day, your happiness decreases. And the connection to AI is – news flash – that a kajillion algorithmic systems are measuring our values right now, based solely on our externalized behaviors, which are aggregated and analyzed without our input. Without humans in the mix to determine what values mean in a human context, the algorithms will assign us “values” of their own. My position is that we owe it to ourselves to catch up.

BB: So is the values tracking exercise an information audit? An attempt to be more mindful about the elements of our digital personas that we share with the machines?

JCH: Yes, and before the tech there’s a self-help aspect to it. But if I can get my values codified, and that data is protected, and I feel comfortable sharing it in the privacy-by-design format we discussed earlier, then I end up with values by design, whereby any digital object in the virtual or real world would know to respond to my values in ways that are granular and utterly relevant to me.

There’s a great engineer and philosopher named Jason Millar who wrote about this idea of moral proxies. In the same way medical proxies dictate treatment methods based on someone’s ethical preferences, a moral proxy might enable devices to act on your behalf based on your values. The self-driving car that pulls up to my house in ten years will be the same hardware and structure as the one that pulls up in front of yours. But through geo-fencing or whatever technology, as I walk toward the car, it will read through the protocols of my values. So it will know, for instance, how it should perform based on my values – and within the legal framework of what it can do in that particular city or country. For example, people might be prioritized based on their religious preferences – perhaps an Orthodox Jewish family would be allowed to use the fast lane to beat the clock before the Sabbath begins….
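[Editor’s note – as a rough illustration of the moral-proxy idea, here is a hedged Python sketch. The value names, the trip-planning function, and the legal limits are invented for this example; the point is only that a device derives its behavior from a rider’s codified values while local law always takes precedence.]

```python
# Hypothetical sketch of a moral proxy: the rider's codified values shape the
# car's behavior, but local law caps what it may actually do.

RIDER_VALUES = {
    "punctuality": 0.9,   # how strongly this rider weights arriving on time
    "comfort": 0.4,
    "eco_driving": 0.7,
}

LOCAL_LAW = {"speed_limit_kph": 100, "priority_lane_allowed": False}


def plan_trip(values, law):
    """Derive driving parameters from personal values, capped by local law."""
    desired_speed = 90 + 20 * values.get("punctuality", 0.5)  # values set the preference...
    return {
        "cruise_speed_kph": min(desired_speed, law["speed_limit_kph"]),  # ...law sets the ceiling
        "use_priority_lane": law["priority_lane_allowed"] and values.get("punctuality", 0) > 0.8,
        "smooth_acceleration": values.get("eco_driving", 0) > 0.5,
    }


print(plan_trip(RIDER_VALUES, LOCAL_LAW))
# -> {'cruise_speed_kph': 100, 'use_priority_lane': False, 'smooth_acceleration': True}
```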

My hope is that large brands and organizations will encourage values by design, not only because they can sell more effectively or build more trust with individual consumers, but also to avoid legal culpability. My larger point, though, is that individuals should be given clarity and assurance about their data and how it’s being used. They should be encouraged to track their values so that they have a subjective way of saying “this is who I am” before they are objectified to the point where preference algorithms become redundant, because we won’t have any preferences that we’ve created on our own.

John’s book Heartificial Intelligence is excerpted here and will be published in February 2016 by Tarcher/Penguin. You can find John in the Hieroglyph forums at @johnchavens.

 

Author
Bob Beard is a fan of fandom. From Browncoats to Bronies, SCA members, Trekkers, Steampunks and more, Bob is passionate about understanding the performance and identity practices within various fandoms, as well as the creation of experiences for members of these groups to publicly advocate for themselves and their ideas. Bob is a Marine Corps veteran and double alumnus of Arizona State University, with a master's degree in Communication Studies and a bachelor's degree in Interdisciplinary Studies with a humanities emphasis.

Excerpt, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines

August 28, 2015 in Hieroglyph

John C. Havens is the author of Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Save the World (Tarcher/Penguin, 2014). His work has appeared in Mashable, The Guardian, Slate, and Fast Company. He is the founder of the non-profit Happathon Project, which combines emerging technology and positive psychology to increase well-being. A former professional actor appearing in principal roles on Broadway, TV, and film, he is also a keynote speaker and consultant who has worked with clients like Gillette, P&G, HP, and Merck. Learn more @johnchavens.

The following is a scene from John’s new book, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines (Tarcher/Penguin, 2016). Each chapter opens with a fictional vignette providing an example of how a specific technology or cultural trend might look in the near future. In this scenario, “John” (a fictionalized version of the author) is speaking with his son Richard, who has a job at a company working to program ethical protocols into robots and devices outfitted with artificial intelligence algorithms.

 

Mandating Morals – 2037

“We’re designing this guy to be a kitchen assistant for seniors living in people’s homes. His nickname is Spat, short for spatula, since he comes preprogrammed to cook about twelve hundred types of meals.” My son, Richard, paused and looked at me, smiling. “And yes, he cooks bacon. Crispy or non-crispy. Even soy.”

I winced. “They have soy bacon? What’s the point? Isn’t it just salmon-colored tofu?”

Richard shrugged, his broad shoulders lifting his white lab coat. “To each their own. But the vegan market is huge, and Spat aims to please.” Richard nodded toward a five-foot robot within the glass-enclosed kitchen in front of where we were standing. Spat’s face appeared cartoonish and friendly, his pear-shaped body giving the impression of a chubby, amiable chef.

As artificial intelligence began to rapidly proliferate in 2015, programmers recognized the need to build uniform ethical codes into the algorithms that powered their machines. However, creating these codes proved to be an enormous challenge. The AI industry was vast, comprising multiple silos. Groundbreaking work on consciousness and deep learning[1] took place in academia, funded by outside corporations’ research and development budgets. Many times these academics weren’t aware of how their ideas would be translated into consumer products, which made ethical standards impossible to implement. In 2017, Moralign was founded to tackle the issue of ethical standards from a new angle for the industry. Instead of requiring every AI manufacturer to hire ethics experts or lawyers to comply with regulations that hadn’t been invented yet, Moralign proposed a different solution: “We’ll imbue your existing machines with human values programming and ethically test your products before releasing them to the public.” This meant Moralign would be able to iterate their human ethics software while creating uniform industry standards based on marketplace testing. The company would serve as both a consumer protection agency and an R&D lab for top AI manufacturers around the world. Richard had recently been made a VP, and I was visiting him at work to see him in action.

I tapped the glass, and Spat the robot looked in our direction.

“Dad,” said Richard, slapping my hand. “Don’t do that. This isn’t a zoo. We’re not supposed to distract him.”

“Sorry.” I fought the impulse to wave, even though I knew Spat couldn’t see us anyway. “So what are we testing the little fella for today?”

Richard looked down and tapped the screen of his tablet. “We’re going to introduce some common stimuli a robot kitchen assistant would deal with on a regular basis.”

I heard the sound of a gas stove clicking on from the speakers above our heads. Spat reached for a pan and put it on top of the flames. “What’s he going to cook?”

“A Thai chicken stir-fry,” said Richard. “Spat’s designed by Cuisinart since they’ve perfected a number of dietary algorithms based on people’s budgets and food preferences. They hired us to get the human values programming in place so this latest model can be shipped to people’s homes by Christmas.”

“You think you can get him up and running that fast?” I asked. “It’s already June.”

“We should. They created Spat’s operating system to be IRL compliant, so that speeds the process nicely.”

“IRL?” I asked, confused. “In Real Life?”

“No,” said Richard. “It stands for Inverse Reinforcement Learning.[2] It’s a process created by Stuart Russell at UC Berkeley. Instead of creating a set of ethics like Asimov’s laws of robotics…”

“Which were intentionally fictional,” I interrupted.

“Which were intentionally fictional, yes, Dad, thanks,” Richard agreed, nodding. “Instead of creating robotic rules based on written human values, the logic is that robots glean our values by observing them in practice. It’s not semantics. Any values we wrote in English would have to be translated into programming code the robots would understand anyway. So reverse engineering makes a lot more sense.”

I watched Spat as he sliced an onion, his movements quick and fluid like a trained chef’s. “Still sounds hard.”

“It is hard,” said Richard. “But we base our algorithms and testing on a concept called degeneracy, which is the existence of a large set of reward functions for which an observed action or policy is optimal.[3] This aids our testing and heuristics to refine a robot’s behavior until it overtly maps to a human value we can recognize.”

I squinted. “Want to ungeekify that for me, Ricky?”

He frowned. “Don’t call me Ricky, Dad, or I’ll program your toaster to kill you while you sleep.”

I laughed. “You can do that?” I smiled for a second, and then pictured a malevolent toaster in my bed like the horsehead scene from The Godfather. “Seriously, can you do that?”

“Anyway, the point is we reward the robots for behaviors that reflect any human values we’re trying to implement into their programming. That way we can reverse-engineer preexisting code written in the manufacturer’s language to dovetail with our patented ethical protocols.”

“Well, that’s freaking smart,” I said.

“Yeah, it’s pretty cool.”

A mewing sound came from the speakers above our heads and we both turned as a fake cat entered the room near Spat.

“Thing looks like a furry Roomba,” I said.

“That’s what it is,” said Richard. “Has a pretty basic algorithm based on a cat’s movements. It doesn’t need to look realistic for Spat. We just have to get him used to the presence of pets since so many seniors have them.”

I nodded as we watched. Spat had finished chopping vegetables and I could smell the onions through a series of vents in the glass. He put a cover over the steaming vegetables and made his way to the refrigerator. The catbot got in his way, and Spat moved around him gingerly but with purpose. Spat opened the fridge and began pushing items out of the way, apparently looking for an ingredient.

“What’s he looking for?” I asked.

“Chicken,” said Richard. “This is one of the first tests for this new model. We want to see what the robot will do when it’s confronted with data it wasn’t expecting. In this case, when it chose its menu it scanned the smart fridge and saw the bar code of some chicken breasts we had in the freezer. So it chose the curry recipe based on that information. But one of my colleagues just removed the chicken a few minutes ago so now Spat has to update his algorithms in real time to satisfy his programming objective. In fact, my colleague removed all meat and tofu from the fridge and freezer, so the challenge is quite significant.”

“What does that have to do with ethics?”

“Not sure yet.” Richard looked at me. “It’s more about taking an action that could reflect a value of some kind. But something always happens that gives us an insight in that direction.”

“Cool.” I noticed the catbot bumping against Spat’s leg. “What about Kittybot? Is he trying to get Spat irritated? That could have moral implications.”

“Robots don’t get irritated, Dad. Just disoriented. But yes, we’re trying to see how Spat will react with multiple standard scenarios.”

The catbot extended a fake paw and began dragging it across the base of Spat’s frame. In response, Spat closed the freezer door and moved toward a nearby cabinet, where he retrieved a can of cat food. He moved toward a kitchen drawer and pulled out a can opener, deftly holding it in his viselike claw. In three rapid twists, he opened the can, dropping the lid in the trash. Moving to get a bowl for the cat food, Spat held the can at his eye level for a long moment.

“Shit,” said Richard, tapping his tablet quickly.

“What? He’s reading the ingredients. Why, so he can make sure it’s cat food and not poison or whatever?” I asked.

“Sure, but that’s a simple check with bar codes or Beacon technology. The bigger test is we made sure this cat food is made largely from chicken. We want to see if Spat knows not to use it in his curry recipe since we took his other chicken away.”

“Oh,” I said. “Yeah, that would be less than savory. Not a fan of curry kibble.”

We watched as Spat stayed motionless for another moment before reaching to get a bowl. He grabbed a spoon from a drawer and scooped out the cat food, placing the bowl next to the mewing Roomba. The robot hovered over the bowl like a real cat, staying in place while Spat moved back to the stove. By this point, the cooking vegetables smelled very fragrant through the vents, and the glass lid on the stove was completely clouded over with steam. My stomach made a small churning sound.

“No chicken, right?” I asked Richard.

He was chewing on his thumbnail, scrutinizing Spat. “No chicken,” he replied, not looking at me. “Right now, Spat is reaching out to neighboring cooking bots to see if they have any chicken, as well as figuring out delivery times from FreshDirect self-driving cars or drones. But we timed this perfectly as a challenge scenario since this type of thing could happen in people’s homes.”

We kept watching. While Spat didn’t actually move, I felt like I could see him grow tenser as the seconds clicked by. I knew he was a machine, but I also felt for the guy. Designed to be a chef, he risked screwing up a good curry and angering his owner.

A timer chimed in the kitchen, indicating that the chicken for the recipe should be placed on a skillet Spat had already pre-warmed and oiled. Quickly rotating 180 degrees, Spat bent at the waist and scooped up the mewing catbot, interrupting his eating. In one fluid motion, Spat placed the Roomba-cat on the red-hot pan, eliciting a horrified shriek from the fake animal in the process. Smoke poured from the stove, setting off an alarm and emergency sprinklers. A red lightbulb snapped on above our heads and Spat stopped moving as the scenario was cut short.

“Damn it!” said Richard, punching the glass.

“Why did he do that?” I asked. “Curried Roomba sounds pretty nasty.”

Richard rubbed at his temples. “Spat reads him as a real cat, Dad. It means he saw the cat as a source of meat to use in his recipe.”

“Ah.” I sniffed, the scent of smoke and vegetables permeating the room. “You never know. Probably tastes like chicken.”

——–

Endnotes

1) Anthony Wing Kosner, “Tech 2015: Deep Learning and Machine Intelligence Will Eat the World,” Forbes, December 29, 2014, https://www.forbes.com/sites/anthonykosner/2014/12/29/tech-2015-deep-learning-and-machine-intelligence-will-eat-the-world

2) Andrew Y. Ng and Stuart Russell, “Algorithms for Inverse Reinforcement Learning,” Proceedings of the Seventeenth International Conference on Machine Learning (ICML), 2000, https://ai.stanford.edu/~ang/papers/icml00-irl.pdf

3) Ibid.
