Interview: Author and Hieroglyph Community Member John C. Havens

John C. Havens is a Hieroglyph community member, a contributor to Mashable and The Guardian, and the author of the book Hacking Happiness. I had a chance to read John’s new book, Heartificial Intelligence (to be published in February 2016 by Tarcher/Penguin), and chat with him about his work studying the intersection of emerging technology and personal well-being.

Note: This interview has been edited for length and clarity.

Bob Beard, Project Hieroglyph: How did you find Project Hieroglyph, and what are your expectations for the project?

John C. Havens: I’m a huge science fiction fan – which, if you have any self-respect, means that you’re a huge Neal Stephenson fan. Snow Crash is a seminal piece of any nerd’s Bible. When I encountered Hieroglyph I thought, what a fantastic idea to harness the power of storytelling to imagine utopian future visions and then create a pragmatic roadmap to which people can contribute. So instead of just wistfully talking in an esoteric, albeit well-meaning way about how the future could look, why not build stories with the people who are equipped to help make that future a reality? I found that extremely exciting.

BB: What are your expectations for this community, and what would you like to see grow out of it?

JCH: What I’m enjoying is thinking through how stories lead to pragmatic change. So I hope that it continues to be not just amazing stories by Neal Stephenson and other writers of his caliber, but also an exploration of how we can create real pathways to positive futures in the minds of readers.

BB: I think you’re doing that yourself in Heartificial Intelligence; I appreciated one of your descriptors for the book, saying it’s not so much about science fiction as it is about science inflection – essentially using storytelling as a collaborative act in designing the future.

JCH: Thank you very much. I hate calling myself a futurist, although I appreciate the term. I call myself a geeky presentist because of what I know about technologies that already exist, but just aren’t ubiquitous yet. For example, you have Google working on self-driving cars and robots that are entering our homes and advances in AI – these are three separate things – but if you think on the level of mass production and of the embeddedness of technology and culture, those three things are naturally going to come together at some point. Telling stories about possible futures is a way of making connections and saying, “hey, do you see these patterns that I see?”

BB: You frame the book in two different ways. There are certainly some positive vignettes about living with technology, but you have also written some very dark futures that could come to pass if we don’t make conscious, thoughtful choices today. Do you see the future as inescapably apocalyptic if we don’t make these changes? Is that the default?

JCH: I’m not anti-tech, but what I am emphatic about telling people is that it is ignorant – and I don’t mean stupid – to ignore the research showing that when we interact with a device, we lose human qualities that cannot be regained. So if we choose to only spend time with robots, then our human empathy will erode. Our relationship with technology is changing how we interact with other humans; as a result, some of our human qualities are atrophying.

And what we cannot ignore is the underlying fact of how our personal data is analyzed, shared, tracked, mined, and used. A standard terms and conditions agreement for companion robots like Buddy and Pepper is likely not enough to inform buyers about the hardware used to analyze and affect their emotions. In a very real sense, the manufacturers can control how we respond to the robots, effectively manipulating our emotions based on their biases. That’s a pivotal part of the conversation. It’s not about privacy; it’s about controlling your identity. It’s not just about money and people manipulating you to buy something. It’s about regaining a space where I don’t have fifty different algorithms telling me what I want.

BB: So where is that space?

JCH: Well there’s a technical answer and an aspirational answer.

Technically, a person could have what’s known as a personal cloud. This has been around for years; it’s a concept called privacy by design, and it simply means that instead of one’s data being fragmented in two or three hundred different places, we have a locus of identity where we can define ourselves. Technologically, a personal cloud structure is pretty doable. There are companies like Personal.com and others in Europe where you’re able to take all your data and set up a dashboard of permissions, specifying who can access it and when. The main reason that’s so hard is that it’s not in Facebook or Google or IBM or LinkedIn’s interest to have you do that, because right now your personal data is a commodity that they use to generate revenue.
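
[Editor’s note – for readers who want to see the shape of this idea, here is a minimal sketch of a personal cloud with a permission dashboard, assuming a simple grant-and-expiry model. It is not Personal.com’s actual product or API; every class, field, and rule below is invented for illustration.]

```python
# A minimal sketch of a "privacy by design" personal cloud: one locus of
# identity where the owner decides who may read which category of data,
# and for how long. All names and fields here are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Grant:
    grantee: str          # e.g. "my-doctor" or "insurance-quote-service"
    categories: set[str]  # e.g. {"fitness", "location"}
    expires: datetime     # access is time-boxed, never open-ended


@dataclass
class PersonalCloud:
    owner: str
    data: dict[str, list] = field(default_factory=dict)  # category -> records
    grants: list[Grant] = field(default_factory=list)

    def grant(self, grantee: str, categories: set[str], days: int) -> None:
        """The owner explicitly permits a party to read certain categories for a limited time."""
        self.grants.append(Grant(grantee, categories, datetime.now() + timedelta(days=days)))

    def read(self, grantee: str, category: str) -> list:
        """Return data only if an unexpired grant covers this party and category."""
        for g in self.grants:
            if g.grantee == grantee and category in g.categories and g.expires > datetime.now():
                return self.data.get(category, [])
        raise PermissionError(f"{grantee} has no active grant for '{category}'")


cloud = PersonalCloud(owner="john")
cloud.data["fitness"] = [{"steps": 9000}]
cloud.grant("my-doctor", {"fitness"}, days=30)
print(cloud.read("my-doctor", "fitness"))  # -> [{'steps': 9000}]
```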

Aspirationally, a lot of Heartificial Intelligence is about what I think is a real positive force right now in the AI world: the field of ethics. I didn’t study it in college, so at first it seemed very general and vague to me – I pictured Socrates wearing a robe or Monty Python sketches about philosophers playing soccer. But what I’ve realized is that applied ethics means asking tough questions about people’s values and about their individual choices. A lot of these personalization algorithms are trying to discover what individuals say will make their lives better, so in one sense it all hinges on those values. AI manufacturers currently use sensors to observe human behavior – that’s the tracking methodology, online and off – and that’s great. There’s a massive wealth of information being generated about our lives, but it doesn’t involve the individual subjectively saying what they feel or think. It only captures the external, objective side of things.

The computer scientist Stuart Russell uses a methodology called inverse reinforcement learning. What he does that most AI manufacturers don’t: when a device’s sensors observe a human doing something for a while, the pattern recognition comes back and says, “here’s the pattern,” and then that pattern is examined further to ask, “what human value does this pattern reflect?” I talk about this in the book [Editor’s note – check out an excerpt here]: if a kitchen robot were being created for Cuisinart and it could cook 10,000 recipes, that would be great. But if a recipe calls for chicken and the robot can’t find any, you have to make sure it’s programmed not to improvise a substitution and cook the pet cat. That’s the kind of substitution that doesn’t align with human values, but the robot needs to be taught that explicitly.
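
[Editor’s note – the toy sketch below illustrates the “check learned patterns against explicitly stated human values before acting” idea from John’s kitchen-robot example. It is not Stuart Russell’s inverse reinforcement learning method; all names and rules are hypothetical.]

```python
# A toy illustration of "check learned patterns against explicit human values
# before acting" -- not Stuart Russell's actual inverse reinforcement learning
# method. Every name and rule below is hypothetical.

# Values the household has stated explicitly for the robot.
HOUSEHOLD_VALUES = {"pets_are_family": True}

# What this value rules out, spelled out for the machine.
NEVER_FOOD = {"pet cat", "pet dog"}

# Substitutions the robot has inferred from observing cooks (one learned badly).
LEARNED_SUBSTITUTIONS = {
    "chicken": ["pet cat", "tofu", "chickpeas"],
}


def safe_substitute(missing_ingredient: str) -> str | None:
    """Pick a learned substitute, but only if it doesn't violate a stated value."""
    for candidate in LEARNED_SUBSTITUTIONS.get(missing_ingredient, []):
        if HOUSEHOLD_VALUES["pets_are_family"] and candidate in NEVER_FOOD:
            continue  # the pattern was observed, but it conflicts with a human value
        return candidate
    return None  # better to ask a human than to guess


print(safe_substitute("chicken"))  # -> "tofu", never the cat
```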

So the practice of applied ethics requires that you take a step back and say, “As we create this product using this algorithm, we cannot ignore the values of the end-user, because those values will define the efficacy and success of what we’re creating.” An increased focus on applied ethics will also help engineers and manufacturers, who are often demonized even though they’re not trained to be ethicists.

BB: You write in the book that our “future happiness is dependent on teaching our machines what we value the most.”

JCH: The central question of the book is, “How will machines know what we value if we don’t know ourselves?” Near the end of the book there is a values tracking assessment that I created with a friend who has a PhD in positive psychology. We examined different psychological studies that have been done over the years and found that there are twelve values that are common all around the world, across multiple cultures, to both men and women. These are concepts like family, art, education, etc. It’s not that you and I will see those things the same way – but that’s the point.

What I’m encouraging people to do is identify and codify the values that animate their lives, because positive psychology research is showing that if you don’t live according to your values every day, your happiness decreases. And the relationship to AI is – news flash – that a kajillion iterative things are all measuring our values right now, based solely on our externalized behaviors, which are aggregated and analyzed without our input. Without humans in the mix to determine what values mean in a human context, the algorithms will assign us “values” of their own. My position is that we owe it to ourselves to catch up.
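
[Editor’s note – a minimal sketch of what a do-it-yourself values-tracking exercise might look like, assuming hand-tagged activities and a simple alignment score. The values and tags are placeholders, not the assessment from the book.]

```python
# A minimal sketch of a do-it-yourself values-tracking exercise: declare your
# values yourself, tag your day against them, and see how much of what you
# value actually showed up. The values and tags are placeholders, not the
# assessment from the book.

MY_VALUES = ["family", "art", "education"]  # chosen by the person, not inferred by an algorithm

# A day's activities, tagged by hand with the values they served.
today = [
    {"activity": "helped my kid with homework", "values": ["family", "education"]},
    {"activity": "scrolled feeds for an hour", "values": []},
    {"activity": "sketched for twenty minutes", "values": ["art"]},
]


def alignment(day: list[dict], declared: list[str]) -> float:
    """Fraction of declared values that showed up in the day at least once."""
    lived = {v for entry in day for v in entry["values"]}
    return len(lived & set(declared)) / len(declared)


print(f"Lived {alignment(today, MY_VALUES):.0%} of my declared values today")
```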

BB: So is the values tracking exercise an information audit? An attempt to be more mindful about the elements of our digital personas that we share with the machines?

JCH: Yes, and before the tech there’s a self-help aspect to it. However, if I can get my values codified, and that data is protected, and I feel comfortable sharing it in the privacy by design format we discussed earlier, then I end up with values by design, whereby any digital object in the virtual or real world would know how to respond to my values in ways that are granular and utterly relevant to me.

There’s a great engineer and philosopher named Jason Millar who wrote about this idea of moral proxies. In the same way medical proxies dictate treatment methods based on someone’s ethical preferences, a moral proxy might enable devices to act on your behalf based on your values. A self-driving car that drives to my house in ten years is the same hardware and structure that’s going to go in front of your house. But through geo-fencing or whatever technology, as I walk towards the car, the car will read through protocols of my values. So it will know, for instance, how it should perform based on my values – and within the legal framework of what it can do in that particular city or country. For example, people might be prioritized based on their religious preferences – perhaps an Orthodox Jewish family would be allowed to use the fast lane to beat the clock before the Sabbath begins….
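
[Editor’s note – a hypothetical sketch of the moral-proxy handshake John describes: the car reads the rider’s declared values, then filters them through what local law permits. Millar’s proposal is conceptual; the field names, rules, and jurisdiction flags here are invented for illustration.]

```python
# A hypothetical sketch of a "moral proxy" handshake: the car loads the rider's
# declared values, then filters them through what local law actually permits.
# The field names, rules, and jurisdictional flags are invented for illustration.

RIDER_VALUES = {
    "ride_style": "calm",      # the rider prefers a gentle ride
    "observes_sabbath": True,  # a time-sensitive religious constraint
}

LOCAL_LAW = {
    "fast_lane_religious_exemption": False,  # whatever this city or country allows
    "speed_limit_kph": 50,
}


def plan_trip(values: dict, law: dict) -> dict:
    """Combine the rider's values with the legal framework of the jurisdiction."""
    use_fast_lane = values.get("observes_sabbath", False) and law["fast_lane_religious_exemption"]
    if values.get("ride_style") == "calm":
        target_speed = min(40, law["speed_limit_kph"])
    else:
        target_speed = law["speed_limit_kph"]
    return {"fast_lane": use_fast_lane, "target_speed_kph": target_speed}


print(plan_trip(RIDER_VALUES, LOCAL_LAW))  # -> {'fast_lane': False, 'target_speed_kph': 40}
```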

My hope is that large brands and organizations will encourage values by design, not only because they can sell more effectively or build more trust with individual consumers, but also to avoid legal culpability. However, my whole point is that individuals should be given clarity and assurance around their data and how it’s being used. They should be encouraged to track their values so that they have a subjective way of saying “this is who I am” before they are objectified to the point where preferential algorithms become redundant because we no longer have any preferences we’ve created on our own.

John’s book Heartificial Intelligence is excerpted here and will be published in February 2016 by Tarcher/Penguin. You can find John in the Hieroglyph forums at @johnchavens.

 

