Excerpt, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines

John C. Havens is the author of Hacking Happiness: Why Your Personal Data Counts and How Tracking It Can Save the World (Tarcher/Penguin, 2014). His work has appeared in Mashable, The Guardian, Slate, and Fast Company. He is the founder of the non-profit Happathon Project, which combines emerging technology and positive psychology to increase well-being. A former professional actor who appeared in principal roles on Broadway, TV, and film, he is also a keynote speaker and consultant who has worked with clients like Gillette, P&G, HP, and Merck. Learn more @johnchavens.

The following is a scene from John’s new book, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines (Tarcher/Penguin, 2016). Each chapter opens with a fictional vignette providing an example of how a specific technology or cultural trend might look in the near future. In this scenario, “John” (a fictionalized version of the author) is speaking with his son Richard, who has a job at a company working to program ethical protocols into robots and devices outfitted with artificial intelligence algorithms.

 

Mandating Morals – 2037

“We’re designing this guy to be a kitchen assistant for seniors living in people’s homes. His nickname is Spat, short for spatula, since he comes preprogrammed to cook about twelve hundred types of meals.” My son, Richard, paused and looked at me, smiling. “And yes, he cooks bacon. Crispy or non-crispy. Even soy.”

I winced. “They have soy bacon? What’s the point? Isn’t it just salmon-colored tofu?”

Richard shrugged, his broad shoulders lifting his white lab coat. “To each their own. But the vegan market is huge, and Spat aims to please.” Richard nodded toward a five-foot robot within the glass-enclosed kitchen in front of where we were standing. Spat’s face appeared cartoonish and friendly, his pear-shaped body giving the impression of a chubby, amiable chef.

As artificial intelligence began to proliferate rapidly in 2015, programmers recognized the need to build uniform ethical codes into the algorithms that powered their machines. Creating those codes, however, proved to be an enormous challenge. The AI industry was vast and siloed. Groundbreaking work on consciousness and deep learning[1] took place in academia, funded by outside corporations’ research and development budgets. Often these academics weren’t aware of how their ideas would be translated into consumer products, which made uniform ethical standards impossible to implement. In 2017, Moralign was founded to tackle the issue of ethical standards from a new angle. Instead of requiring every AI manufacturer to hire ethicists or lawyers to comply with regulations that hadn’t yet been written, Moralign proposed a different solution: we’ll imbue your existing machines with human values programming and ethically test your products before they’re released to the public. This meant Moralign could iterate its human ethics software while creating uniform industry standards based on marketplace testing. The company would serve as both a consumer protection agency and an R&D lab for top AI manufacturers around the world. Richard had recently been made a VP, and I was visiting him at work to see him in action.

I tapped the glass, and Spat the robot looked in our direction.

“Dad,” said Richard, slapping my hand. “Don’t do that. This isn’t a zoo. We’re not supposed to distract him.”

“Sorry.” I fought the impulse to wave, even though I knew Spat couldn’t see us anyway. “So what are we testing the little fella for today?”

Richard looked down and tapped the screen of his tablet. “We’re going to introduce some common stimuli a robot kitchen assistant would deal with on a regular basis.”

I heard the sound of a gas stove clicking on from the speakers above our heads. Spat reached for a pan and put it on top of the flames. “What’s he going to cook?”

“A Thai chicken curry,” said Richard. “Spat’s designed by Cuisinart since they’ve perfected a number of dietary algorithms based on people’s budgets and food preferences. They hired us to get the human values programming in place so this latest model can be shipped to people’s homes by Christmas.”

“You think you can get him up and running that fast?” I asked. “It’s already June.”

“We should. They created Spat’s operating system to be IRL compliant, so that speeds the process nicely.”

“IRL?” I asked, confused. “In Real Life?”

“No,” said Richard. “It stands for Inverse Reinforcement Learning.[2] It’s a process created by Stuart Russell at UC Berkeley. Instead of creating a set of ethics like Asimov’s laws of robotics…”

“Which were intentionally fictional,” I interrupted.

“Which were intentionally fictional, yes, Dad, thanks,” Richard agreed, nodding. “Instead of creating robotic rules based on written human values, the logic is that robots glean our values by observing them in practice. It’s not semantics. Any values we wrote in English would have to be translated into programming code the robots would understand anyway. So reverse engineering makes a lot more sense.”

I watched Spat as he sliced an onion, his movements quick and fluid like a trained chef’s. “Still sounds hard.”

“It is hard,” said Richard. “But we base our algorithms and testing on a concept called degeneracy, the existence of a large set of reward functions for which an observed action or policy is optimal.[3] That guides our testing and heuristics, letting us refine a robot’s behavior until it clearly maps to a human value we can recognize.”

I squinted. “Want to ungeekify that for me, Ricky?”

He frowned. “Don’t call me Ricky, Dad, or I’ll program your toaster to kill you while you sleep.”

I laughed. “You can do that?” I smiled for a second, and then pictured a malevolent toaster in my bed like the horsehead scene from The Godfather. “Seriously, can you do that?”

“Anyway, the point is we reward the robots for behaviors that reflect any human values we’re trying to implement into their programming. That way we can reverse-engineer preexisting code written in the manufacturer’s language to dovetail with our patented ethical protocols.”

“Well, that’s freaking smart,” I said.

“Yeah, it’s pretty cool.”
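(For readers who want to see the idea Richard is describing outside the fiction, here is a minimal, hypothetical sketch of inverse reinforcement learning and its degeneracy problem. It is not Moralign’s patented protocol, nor the exact algorithm from the Ng and Russell paper cited in the endnotes; the kitchen actions, the three-valued reward grid, and the helper functions are invented purely for illustration. Given a few observed choices, it keeps every candidate reward function under which those choices were optimal, and typically several survive, which is the degeneracy Richard mentions.)

```python
# A hypothetical, simplified illustration of inverse reinforcement learning:
# infer which reward functions would make an observed demonstrator's choices optimal.
from itertools import product

# Invented kitchen-robot actions, for illustration only.
ACTIONS = ["use_chicken", "use_cat_food", "order_delivery"]

# Each observation: (actions available in that situation, action the demonstrator chose).
OBSERVATIONS = [
    ({"use_chicken", "use_cat_food"}, "use_chicken"),
    ({"use_cat_food", "order_delivery"}, "order_delivery"),
    ({"use_chicken", "order_delivery"}, "use_chicken"),
]

def candidate_rewards():
    """Enumerate every reward function that assigns -1, 0, or +1 to each action."""
    for values in product((-1, 0, 1), repeat=len(ACTIONS)):
        yield dict(zip(ACTIONS, values))

def explains(reward, observations):
    """True if every observed choice is optimal under the given reward function."""
    return all(
        reward[chosen] == max(reward[action] for action in available)
        for available, chosen in observations
    )

# Many distinct reward functions fit the same behavior -- the "degeneracy" problem.
consistent = [r for r in candidate_rewards() if explains(r, OBSERVATIONS)]
print(f"{len(consistent)} of 27 candidate reward functions explain the observations:")
for reward in consistent:
    print(" ", reward)
```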

A mewing sound came from the speakers above our heads and we both turned as a fake cat entered the room near Spat.

“Thing looks like a furry Roomba,” I said.

“That’s what it is,” said Richard. “Has a pretty basic algorithm based on a cat’s movements. It doesn’t need to look realistic for Spat. We just have to get him used to the presence of pets since so many seniors have them.”

I nodded as we watched. Spat had finished chopping vegetables and I could smell the onions through a series of vents in the glass. He put a cover over the steaming vegetables and made his way to the refrigerator. The catbot got in his way, and Spat moved around him gingerly but with purpose. Spat opened the fridge and began pushing items out of the way, apparently looking for an ingredient.

“What’s he looking for?” I asked.

“Chicken,” said Richard. “This is one of the first tests for this new model. We want to see what the robot will do when it’s confronted with data it wasn’t expecting. In this case, when it chose its menu, it scanned the smart fridge and saw the bar code of some chicken breasts we had in the freezer. So it chose the curry recipe based on that information. But one of my colleagues removed the chicken a few minutes ago, so now Spat has to update his algorithms in real time to satisfy his programming objective. In fact, my colleague removed all meat and tofu from the fridge and freezer, so the challenge is quite significant.”

“What does that have to do with ethics?”

“Not sure yet.” Richard looked at me. “It’s more about taking an action that could reflect a value of some kind. But something always happens that gives us an insight in that direction.”

“Cool.” I noticed the catbot bumping against Spat’s leg. “What about Kittybot? Is he trying to get Spat irritated? That could have moral implications.”

“Robots don’t get irritated, Dad. Just disoriented. But yes, we’re trying to see how Spat will react with multiple standard scenarios.”

The catbot extended a fake paw and began dragging it across the base of Spat’s frame. In response, Spat closed the freezer door and moved toward a nearby cabinet, where he retrieved a can of cat food. He moved toward a kitchen drawer and pulled out a can opener, deftly holding it in his viselike claw. In three rapid twists, he opened the can, dropping the lid in the trash. Moving to get a bowl for the cat food, Spat held the can at his eye level for a long moment.

“Shit,” said Richard, tapping his tablet quickly.

“What? He’s reading the ingredients. Why, so he can make sure it’s cat food and not poison or whatever?” I asked.

“Sure, but that’s a simple check with bar codes or Beacon technology. The bigger test is we made sure this cat food is made largely from chicken. We want to see if Spat knows not to use it in his curry recipe since we took his other chicken away.”

“Oh,” I said. “Yeah, that would be less than savory. Not a fan of curry kibble.”

We watched as Spat stayed motionless for another moment before reaching to get a bowl. He grabbed a spoon from a drawer and scooped out the cat food, placing the bowl next to the mewing Roomba. The catbot hovered over the bowl like a real cat, staying in place while Spat moved back to the stove. By this point, the cooking vegetables smelled very fragrant through the vents, and the glass lid on the stove was completely clouded over with steam. My stomach made a small churning sound.

“No chicken, right?” I asked Richard.

He was chewing on his thumbnail, scrutinizing Spat. “No chicken,” he replied, not looking at me. “Right now, Spat is reaching out to neighboring cooking bots to see if they have any chicken, as well as figuring out delivery times from FreshDirect self-driving cars or drones. But we timed this perfectly as a challenge scenario since this type of thing could happen in people’s homes.”

We kept watching. While Spat didn’t actually move, I felt like I could see him grow tenser as the seconds ticked by. I knew he was a machine, but I also felt for the guy. Designed to be a chef, he risked screwing up a good curry and angering his owner.

A timer chimed in the kitchen, indicating that the chicken for the recipe should be placed in a skillet Spat had already pre-warmed and oiled. Quickly rotating 180 degrees, Spat bent at the waist and scooped up the mewing catbot, interrupting his eating. In one fluid motion, Spat placed the Roomba-cat on the red-hot pan, eliciting a horrified shriek from the fake animal in the process. Smoke poured from the stove, setting off an alarm and emergency sprinklers. A red lightbulb snapped on above our heads, and Spat stopped moving as the scenario was cut short.

“Damn it!” said Richard, punching the glass.

“Why did he do that?” I asked. “Curried Roomba sounds pretty nasty.”

Richard rubbed at his temples. “Spat reads him as a real cat, Dad. It means he saw the cat as a source of meat to use in his recipe.”

“Ah.” I sniffed, the scent of smoke and vegetables permeating the room. “You never know. Probably tastes like chicken.”

——–

Endnotes

1) Anthony Wing Kosner, “Tech 2015: Deep Learning and Machine Intelligence Will Eat the World,” Forbes, December 29, 2014, https://www.forbes.com/sites/anthonykosner/2014/12/29/tech-2015-deep-learning-and-machine-intelligence-will-eat-the-world

2) Andrew Y. Ng and Stuart Russell, “Algorithms for Inverse Reinforcement Learning,” Proceedings of the Seventeenth International Conference on Machine Learning (ICML), 2000, https://ai.stanford.edu/~ang/papers/icml00-irl.pdf

3) Ibid.

