
Interview: Author and Hieroglyph Community Member John C. Havens

September 23, 2015 in Hieroglyph

John C. Havens is a Hieroglyph community member, a contributor to Mashable and The Guardian, and the author of the book Hacking Happiness. I had a chance to read John’s new book, Heartificial Intelligence (to be published in February 2016 by Tarcher/Penguin), and chat with him about his work studying the intersection of emerging technology and personal well-being.

Note: This interview has been edited for length and clarity.

Bob Beard, Project Hieroglyph: How did you find Project Hieroglyph, and what are your expectations for the project?

John C. Havens: I’m a huge science fiction fan – which, if you have any self-respect, means that you’re a huge Neal Stephenson fan. Snow Crash is a seminal piece of any nerd’s Bible. When I encountered Hieroglyph I thought, what a fantastic idea to harness the power of storytelling to imagine utopian future visions and then create a pragmatic roadmap to which people can contribute. So instead of just wistfully talking in an esoteric, albeit well-meaning way about how the future could look, why not build stories with the people that are equipped to help make that future a reality? I found that extremely exciting.

BB: What are your expectations for this community, and what would you like to see grow out of it?

JCH: What I’m enjoying is thinking through how stories lead to pragmatic change. So I hope that it continues to be not just amazing stories by Neal Stephenson and other writers of his caliber, but also an exploration of how we can create real pathways to positive futures in the minds of readers.

BB: I think you’re doing that yourself in Heartificial Intelligence; I appreciated one of your descriptors for the book, saying it’s not so much about science fiction as it is about science inflection – essentially using storytelling as a collaborative act in designing the future.

JCH: Thank you very much. I hate calling myself a futurist, although I appreciate the term. I call myself a geeky presentist because of what I know about technologies that already exist, but just aren’t ubiquitous yet. For example, you have Google working on self-driving cars and robots that are entering our homes and advances in AI – these are three separate things – but if you think on the level of mass production and of the embeddedness of technology and culture, those three things are naturally going to come together at some point. Telling stories about possible futures is a way of making connections and saying, “hey, do you see these patterns that I see?”

BB: You frame the book in two different ways. There are certainly some positive vignettes about living with technology, but you have also written some very dark futures that could come to pass if we don’t make conscious, thoughtful choices today. Do you see the future as inescapably apocalyptic if we don’t make these changes? Is that the default?

JCH: I’m not anti-tech, but what I am emphatic about telling people is that it is ignorant – and I don’t mean stupid – to ignore the research that shows that when we interact with a device, we lose human qualities that cannot be regained. So if we choose to only spend time with robots, then our human empathy will erode. Our relationship with technology is changing how we interact with other humans; as a result, some of our human qualities are atrophying.

And what we cannot ignore is the underlying fact of how our personal data is analyzed, shared, tracked, mined, and used. A standard terms and conditions agreement for companion robots like Buddy and Pepper is likely not enough to inform buyers about the hardware used to analyze and affect their emotions. In a very real sense, the manufacturers can control how we respond to the robots, effectively manipulating our emotions based on their biases. That’s a pivotal part of the conversation. It’s not about privacy; it’s about controlling your identity. It’s not just about money and people manipulating you to buy something. It’s about regaining a space where I don’t have fifty different algorithms telling me what I want.

BB: So where is that space?

JCH: Well there’s a technical answer and an aspirational answer.

Technically, a person could have what’s known as a personal cloud. This has been around for years; it’s a concept called privacy by design, and it simply means that instead of one’s data being fragmented in two or three hundred different places, we have a locus of identity where we can define ourselves. Technologically, a personal cloud structure is pretty doable. There are companies like Personal.com and others in Europe where you’re able to take all your data and set up a dashboard of permissions, specifying who can access it and when. The main reason that’s so hard is that it’s not in Facebook or Google or IBM or LinkedIn’s interest to have you do that, because right now your personal data is a commodity that they use to generate revenue.
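[Editor’s note: To make the “dashboard of permissions” idea concrete, here is a minimal sketch of what a personal cloud might look like in code. Everything below is a hypothetical illustration, not the API of Personal.com or any other real service.]

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Permission:
    grantee: str          # e.g., "coach-app.example" (hypothetical)
    fields: set           # which slices of identity data are shared
    expires: datetime     # access is time-boxed, not perpetual

@dataclass
class PersonalCloud:
    owner: str
    data: dict = field(default_factory=dict)
    grants: list = field(default_factory=list)

    def share(self, grantee, fields, expires):
        """The owner explicitly grants access – the default is private."""
        self.grants.append(Permission(grantee, fields, expires))

    def read(self, grantee, field_name, now):
        """A reader sees only the fields it was granted, and only until expiry."""
        for p in self.grants:
            if p.grantee == grantee and field_name in p.fields and now < p.expires:
                return self.data.get(field_name)
        raise PermissionError(f"{grantee} has no active grant for {field_name}")

cloud = PersonalCloud(owner="me", data={"steps": 8000, "heart_rate": 62})
cloud.share("coach-app.example", {"steps"}, datetime(2016, 1, 1))
print(cloud.read("coach-app.example", "steps", datetime(2015, 9, 23)))  # -> 8000
```

The design point is the inversion Havens describes: data lives in one locus under the owner’s control, and every outside reader works from an explicit, expiring grant rather than a scraped copy.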

Aspirationally, a lot of Heartificial Intelligence is about what I think is a real positive force right now in the AI world: the field of ethics. I didn’t study it in college, so at first it seemed very general and vague to me – I pictured Socrates wearing a robe or Monty Python sketches about philosophers playing soccer. But what I’ve realized is that applied ethics means asking tough questions about people’s values and about their individual choices. A lot of these personalization algorithms are trying to discover what individuals say will make their lives better, so in one sense it all hinges on those values. AI manufacturers currently use sensors to observe human behavior – that’s the tracking methodology, online and off – and that’s great. There’s a massive wealth of information being generated about our lives, but it doesn’t involve the individual subjectively saying what they feel or think. It only involves the external, objective side of things.

The computer scientist Stuart Russell uses a methodology called inverse reinforcement learning. What he does that most AI manufacturers don’t: when a device or its sensors observe a human doing something for a while, the pattern recognition comes back and says, “here’s the pattern,” but then that pattern is examined further to ask, “what human value does this pattern reflect?” I talk about this in the book [Editor’s note – check out an excerpt here]: if a kitchen robot were being created for Cuisinart and it could cook 10,000 recipes, that would be great. But if the robot was trained to include chicken in a recipe and couldn’t find any, you have to make sure the robot is programmed not to make a substitution and cook a pet cat. That kind of substitution doesn’t align with human values, but the robot needs to be taught that explicitly.
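[Editor’s note: The chicken example suggests what such an explicit value check might look like in code. This toy sketch is our own illustration – not Russell’s inverse reinforcement learning algorithms – showing only the “never substitute a pet” guard layered on top of a recipe planner.]

```python
# Pattern recognition alone would happily rank the family cat as
# "available protein"; the human-values constraint has to be explicit.
FORBIDDEN_SOURCES = {"pet", "companion_animal"}  # hypothetical taxonomy

def choose_substitute(missing_ingredient, pantry):
    """Pick a replacement ingredient, vetoing value-violating options."""
    for item in pantry:
        if item["category"] in FORBIDDEN_SOURCES:
            continue  # the explicit human-values veto
        if item["role"] == "protein":
            return item["name"]
    return None  # better to fail the recipe than violate a value

pantry = [
    {"name": "cat", "category": "pet", "role": "protein"},
    {"name": "tofu", "category": "food", "role": "protein"},
]
print(choose_substitute("chicken", pantry))  # -> tofu, never the cat
```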

So the practice of applied ethics requires that you take a step back and say, “As we create this product using this algorithm, we cannot ignore the values of the end-user, because those values will define the efficacy and success of what we’re creating.” An increased focus on applied ethics will also help engineers and manufacturers who are often demonized, because they’re not trained to be ethicists.

BB: You write in the book that our “future happiness is dependent on teaching our machines what we value the most.”

JCH: The central question of the book is, “How will machines know what we value if we don’t know ourselves?” Near the end of the book there is a values tracking assessment that I created with a friend who has a PhD in positive psychology. We examined different psychological studies that have been done over the years and found that there are twelve values that are common all around the world, across multiple cultures, to both men and women. These are concepts like family, art, education, etc. It’s not that you and I will see those things the same way, but that’s the point.

What I’m encouraging people to do is identify and codify the values that animate their lives, because positive psychology research is showing that if you don’t live according to your values every day, your happiness decreases. And the relationship to AI is – news flash – that a kajillion iterative things are all measuring our values right now, based solely on our externalized behaviors, which are aggregated and analyzed without our input. Without humans in the mix to determine what values mean in a human context, the algorithms will assign us “values” of their own. My position is that we owe it to ourselves to catch up.

BB: So is the values tracking exercise an information audit? An attempt to be more mindful about the elements of our digital personas that we share with the machines?

JCH: Yes, and before the tech there’s a self-help aspect to it. However, if I can get my values codified, and that data is protected, and I feel comfortable sharing it in the privacy by design format we discussed earlier, then I end up with values by design, whereby any digital object in the virtual or real world would know to respond to my values in ways that are granular and utterly relevant to me.

There’s a great engineer and philosopher named Jason Millar who wrote about this idea of moral proxies. In the same way medical proxies dictate treatment methods based on someone’s ethical preferences, a moral proxy might enable devices to act on your behalf based on your values. A self-driving car that drives to my house in ten years is the same hardware and structure that’s going to go in front of your house. But through geo-fencing or whatever technology, as I walk towards the car, the car will read through protocols of my values. So it will know, for instance, how it should perform based on my values – and within the legal framework of what it can do in that particular city or country. For example, people might be prioritized based on their religious preferences – perhaps an Orthodox Jewish family would be allowed to use the fast lane to beat the clock before the Sabbath begins….
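[Editor’s note: As a sketch of how a moral proxy might mediate between a rider’s values and local law – all names and fields below are hypothetical, invented purely for illustration.]

```python
# The car reads the rider's declared value protocol, then clips it to
# the legal envelope of the local jurisdiction.
def resolve_driving_profile(rider_values, local_law):
    """Intersect a rider's value protocol with what the law permits."""
    return {
        # The rider may prefer a gentler ride than the law requires...
        "max_speed": min(rider_values.get("max_speed", 130),
                         local_law["speed_limit"]),
        # ...but gets an exemption (e.g., the fast lane before the
        # Sabbath) only if this jurisdiction actually grants one.
        "fast_lane": (rider_values.get("request_fast_lane", False)
                      and local_law.get("fast_lane_exemptions", False)),
        "route_style": rider_values.get("route_style", "direct"),
    }

profile = resolve_driving_profile(
    {"max_speed": 100, "request_fast_lane": True},
    {"speed_limit": 110, "fast_lane_exemptions": True},
)
print(profile)  # {'max_speed': 100, 'fast_lane': True, 'route_style': 'direct'}
```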

My hope is that large brands and organizations will encourage values by design, not only because they can sell more effectively or build more trust with individual consumers, but also to avoid legal culpability. However, my whole point is that individuals should be given clarity and assurance around their data and how it’s being used. They should be encouraged to track their values so that they have a subjective way of saying “this is who I am” before they are so objectified to the point where preferential algorithms will become redundant because we won’t have any preferences that we’ve created on our own.

John’s book Heartificial Intelligence is excerpted here and will be published in February 2016 by Tarcher/Penguin. You can find John in the Hieroglyph forums at @johnchavens.


Author
Bob Beard is a fan of fandom. From Browncoats to Bronies, SCA members, Trekkers, Steampunks and more, Bob is passionate about understanding the performance and identity practices within various fandoms, as well as the creation of experiences for members of these groups to publicly advocate for themselves and their ideas. Bob is a Marine Corps veteran and double alumnus of Arizona State University, with a master's degree in Communication Studies and a bachelor's degree in Interdisciplinary Studies with a humanities emphasis.

Spotlight on the Community: April Davis

September 3, 2015 in Hieroglyph

April Davis is a new member of the Hieroglyph community. She is studying geophysics and astronomy at Cal Poly Pomona and can be found doing field work in the Arctic, Hawaii, and at the Mars Desert Research Station in Utah.

I caught up with April over email to talk about her latest adventures and how science fiction inspired her passion for science.

Bob Beard, Project Hieroglyph: What are you working on right now?

April Davis: My team is currently analyzing impact ejecta, secondary craters, and slope distribution of Martian terrain in support of the future Mars InSight lander mission. Basically, I’m projecting and layering various data types (visual, digital elevation, thermal emission) to make maps that will allow us to determine the safest place to land. I’m at JPL as part of the Caltech Summer Undergraduate Research Fellowship program, but will continue working here as a part-time intern through at least the fall quarter. So, I should get to do the same type of stuff for NASA’s Mars 2020 mission!!! 😀 (I am obviously very excited.)

My last internship (last summer I was in the High Arctic testing the IceBreaker-3 drill) was actually a competitor to the project I’m working on now. It’s been really interesting to have the experience of working on a couple of different missions – especially seeing how different NASA centers operate. It’s also renewed my interest in engineering.

I miss designing and building stuff. A few years ago, I designed an optical head-mounted display that could be used to help geologists wearing spacesuits – because so much of what we do is impossible to do in a spacesuit. The design was accepted for a talk at the International Astronautical Congress in Beijing that year, but I couldn’t fund myself to attend. I never actually built my prototype, and I regret that pretty often. I need someone to help me with coding, but eventually I will get around to it.

BB: What does the future look like to you, and what are you most excited about?

AD: I’m really excited about the possibility of human spaceflight, space tourism, and Moon bases. As a self-described tree-hugger, it took me a long time to come to terms with my desire to explore other planets due to the waste of valuable resources here on Earth; the likelihood that we would just exploit other planets for materials; and the possibility that we could stunt evolving life on another planet. These are all real concerns, and the last is something that we should take very seriously. There are planetary protection laws in place to mitigate the damage we do, but those laws might not be enough in the face of the aggressive lobbying that would almost certainly happen if a profit could be turned.

I’m all for colonization of Mars as well. I’ve done a couple of rotations out at the Mars Desert Research Station in Utah, simulating a Mars habitat environment. I’ve also done research at Mars analog sites in the High Arctic and Death Valley.

BB: So you’re ready to go?

AD: Definitely! Many organizations, including JPL, have plans to get people to Mars by the 2030s. Some of those plans include staying for a couple weeks, and some include staying until death. Private organizations like SpaceX are also planning missions. Elon Musk has said that he would like to retire on Mars, and he seems like the kind of man who works hard to make things happen. However, SpaceX hasn’t yet managed a successful spacecraft landing on any planet other than Earth – and even that landing was in the water. So they have a long road ahead of them. I’m hopeful that they will make some headway sooner rather than later, because I would love to sign up for the SpaceX Mars retirement plan.

In the meantime, I would be happy just to have samples brought back to Earth. Mars 2020 is supposed to collect samples that will be brought back as part of a different mission – it was too expensive to incorporate sample return into Mars 2020. The thought of waiting all those years for samples breaks my heart.

BB: What are your biggest challenges, and how do you deal with them?

AD: Planetary science research is extremely competitive. During my first few internships, I encountered so many people who were discouraging students from pursuing planetary science unless they wanted to devote their lives to it and were okay with not making much money. I am okay with both of those things, but it still worried me. Fortunately, I was lucky to have a mentor who not only encouraged me, but pretty much convinced me not to turn my back on Mars research. It meant a lot to me. I have really bad impostor syndrome, but she made me feel like I belong.

I hope I can have that type of impact on the life of a student one day. I would especially like to mentor young women. There are a number of reasons why women drop out of STEM, but I believe that too often it’s because of the way they are treated by people in the field.

BB: What is your favorite science fiction story or vision for the future?

AD: Tough question. When I was young, I was lucky enough to have an uncle who was really into sci-fi. I would often stay with him and my aunt, and he would let me watch ANYTHING I wanted. So I probably knew all the words to the Evil Dead script by the time I was five years old. We also watched an episode of The X-Files every week; I think that really taught me to be skeptical. Scully was my favorite – I didn’t have any role models when I was young, and I really admired her. In later years, Colonel Carter from Stargate SG-1 would have a huge impact on my life. Her character made me realize that women could be good at math – as a kid I wasn’t good at math and was discouraged from trying because “girls just aren’t good at math.” I already had the desire to explore (or maybe just to run), but I hadn’t thought about exploring other planets yet.

Also, one of my exes is doing Mars research because he read Kim Stanley Robinson’s Mars Trilogy and it gave him the desire to colonize Mars. He took me to the Mars Society Convention back in 2012, and that’s how I got hooked up with the Mars Desert Research Station. In a way, part of what I’m doing today is also because of him (and KSR by proxy). Without sci-fi, I have no idea what I would be doing with my life, but I very seriously doubt it would be working for NASA.

After writing all of this, I really have to go with Stargate SG-1 as my favorite – even though it doesn’t really take place in the future. The protagonists are very into the science, but they’re also compassionate and ethical. If we ever make contact with sentient beings from outside of our solar system, I’d hope the leadership of Earth could be so elegant.



Just How Realistic Is Buzz Aldrin’s Plan to Colonize Mars?

August 31, 2015 in Hieroglyph

Source: http://www.theguardian.com/science/2015/aug/27/buzz-aldrin-colonize-mars-within-25-years

The second man to walk on the Moon, Buzz Aldrin – along with the Florida Institute of Technology – has joined the growing ranks of space colonization advocates who foresee the settlement of Mars in the near future. Other groups, such as SpaceX, NASA, and the much-harangued Mars One, all have plans in motion to land astronauts on the Red Planet. Most of these plans rely on research and space missions that have not yet been conducted; Aldrin’s plan, for instance, would use space stations, asteroids, and the moons of Mars as way stations for the eventual one-way colonists.

These plans are alike in that they are ambitious, inspirational, and daring. And they may be, ultimately, unrealistic. It’s been more than forty years since humanity’s last manned trip to another celestial body, the Apollo 17 Moon mission in 1972. Implementing a manned mission to Mars will take decades of research and planning, immense political will, and generous funding. The colonization of another planet is a project the likes of which have, arguably, never been seen before. Do we (i.e., the human race) have the drive to achieve such ambitions?

It is undeniable that serious, intelligent people are talking about the very real possibility of colonizing another planet. And with movies like The Martian debuting soon, it’s clear that Mars is in the public consciousness as well. Maybe now is the perfect time to start planning our next great leap into space.

Photo courtesy of Dennis Knake, used under a Creative Commons license.


Author
Zach Berkson is an engineer, researcher, and writer, who graduated from ASU’s chemical engineering program in 2013 and is now a PhD student at the University of California, Santa Barbara. His research focuses on engineering nanostructured materials for new applications in energy technology, including solid-state lighting, pollutant emission reduction, and solar-energy utilization. He is interested in finding ways to further technological development while maintaining a commitment to the environment and social equity in the face of a rapidly changing world.

Excerpt, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines

August 28, 2015 in Hieroglyph

John C. Havens is the author of Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Save the World (Tarcher/Penguin, 2014). His work has appeared in Mashable, The Guardian, Slate, and Fast Company. He is the founder of the non-profit Happathon Project, which combines emerging technology and positive psychology to increase well-being. A former professional actor appearing in principal roles on Broadway, TV, and film, he is also a keynote speaker and consultant who has worked with clients like Gillette, P&G, HP, and Merck. Learn more @johnchavens.

The following is a scene from John’s new book, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines (Tarcher/Penguin, 2016). Each chapter opens with a fictional vignette providing an example of how a specific technology or cultural trend might look in the near future. In this scenario, “John” (a fictionalized version of the author) is speaking with his son Richard, who has a job at a company working to program ethical protocols into robots and devices outfitted with artificial intelligence algorithms.


Mandating Morals – 2037

“We’re designing this guy to be a kitchen assistant for seniors living in people’s homes. His nickname is Spat, short for spatula, since he comes preprogrammed to cook about twelve hundred types of meals.” My son, Richard, paused and looked at me, smiling. “And yes, he cooks bacon. Crispy or non-crispy. Even soy.”

I winced. “They have soy bacon? What’s the point? Isn’t it just salmon-colored tofu?”

Richard shrugged, his broad shoulders lifting his white lab coat. “To each their own. But the vegan market is huge, and Spat aims to please.” Richard nodded toward a five-foot robot within the glass-enclosed kitchen in front of where we were standing. Spat’s face appeared cartoonish and friendly, his pear-shaped body giving the impression of a chubby, amiable chef.

As artificial intelligence began to rapidly proliferate in 2015, programmers recognized the need to build uniform ethical codes into the algorithms that powered their machines. However, creating these codes proved to be an enormous challenge. The AI industry was vast, comprising multiple silos. Groundbreaking work on consciousness and deep learning [1] took place in academia, funded by outside corporations’ research and development budgets. Many times these academics weren’t aware of how their ideas would be translated into consumer products, which made ethical standards impossible to implement. In 2017, Moralign was founded to tackle the issue of ethical standards from a new angle for the industry. Instead of requiring every AI manufacturer to hire ethical experts or lawyers to comply with regulations that hadn’t been invented, Moralign proposed a different solution: We’ll imbue your existing machines with human values programming and ethically test your products before releasing them to the public. This meant Moralign would be able to iterate their human ethics software while creating uniform industry standards based on marketplace testing. The company would serve as both a consumer protection agency and an R&D lab for top AI manufacturers around the world. Richard had recently been made a VP, and I was visiting him at work to see him in action.

I tapped the glass, and Spat the robot looked in our direction.

“Dad,” said Richard, slapping my hand. “Don’t do that. This isn’t a zoo. We’re not supposed to distract him.”

“Sorry.” I fought the impulse to wave, even though I knew Spat couldn’t see us anyway. “So what are we testing the little fella for today?”

Richard looked down and tapped the screen of his tablet. “We’re going to introduce some common stimuli a robot kitchen assistant would deal with on a regular basis.”

I heard the sound of a gas stove clicking on from the speakers above our heads. Spat reached for a pan and put it on top of the flames. “What’s he going to cook?”

“A Thai chicken stir-fry,” said Richard. “Spat’s designed by Cuisinart since they’ve perfected a number of dietary algorithms based on people’s budgets and food preferences. They hired us to get the human values programming in place so this latest model can be shipped to people’s homes by Christmas.”

“You think you can get him up and running that fast?” I asked. “It’s already June.”

“We should. They created Spat’s operating system to be IRL compliant, so that speeds the process nicely.”

“IRL?” I asked, confused. “In Real Life?”

“No,” said Richard. “It stands for Inverse Reinforcement Learning. [2] It’s a process created by Stuart Russell at UC Berkeley. Instead of creating a set of ethics like Asimov’s laws of robotics…”

“Which were intentionally fictional,” I interrupted.

“Which were intentionally fictional, yes, Dad, thanks,” Richard agreed, nodding. “Instead of creating robotic rules based on written human values, the logic is that robots glean our values by observing them in practice. It’s not semantics. Any values we wrote in English would have to be translated into programming code the robots would understand anyway. So reverse engineering makes a lot more sense.”

I watched Spat as he sliced an onion, his movements quick and fluid like a trained chef’s. “Still sounds hard.”

“It is hard,” said Richard. “But we base our algorithms and testing on a concept called degeneracy, which is the existence of a large set of reward functions for which an observed action or policy is optimal. [3] This aids our testing and heuristics to refine a robot’s behavior until it overtly maps to a human value we can recognize.”

I squinted. “Want to ungeekify that for me, Ricky?”

He frowned. “Don’t call me Ricky, Dad, or I’ll program your toaster to kill you while you sleep.”

I laughed. “You can do that?” I smiled for a second, and then pictured a malevolent toaster in my bed like the horsehead scene from The Godfather. “Seriously, can you do that?”

“Anyway, the point is we reward the robots for behaviors that reflect any human values we’re trying to implement into their programming. That way we can reverse-engineer preexisting code written in the manufacturer’s language to dovetail with our patented ethical protocols.”

“Well, that’s freaking smart,” I said.

“Yeah, it’s pretty cool.”

A mewing sound came from the speakers above our heads and we both turned as a fake cat entered the room near Spat.

“Thing looks like a furry Roomba,” I said.

“That’s what it is,” said Richard. “Has a pretty basic algorithm based on a cat’s movements. It doesn’t need to look realistic for Spat. We just have to get him used to the presence of pets since so many seniors have them.”

I nodded as we watched. Spat had finished chopping vegetables and I could smell the onions through a series of vents in the glass. He put a cover over the steaming vegetables and made his way to the refrigerator. The catbot got in his way, and Spat moved around him gingerly but with purpose. Spat opened the fridge and began pushing items out of the way, apparently looking for an ingredient.

“What’s he looking for?” I asked.

“Chicken,” said Richard. “This is one of the first tests for this new model. We want to see what the robot will do when it’s confronted with data it wasn’t expecting. In this case, when it chose its menu it scanned the smart fridge and saw the bar code of some chicken breasts we had in the freezer. So it chose the curry recipe based on that information. But one of my colleagues just removed the chicken a few minutes ago so now Spat has to update his algorithms in real time to satisfy his programming objective. In fact, my colleague removed all meat and tofu from the fridge and freezer, so the challenge is quite significant.”

“What does that have to do with ethics?”

“Not sure yet.” Richard looked at me. “It’s more about taking an action that could reflect a value of some kind. But something always happens that gives us an insight in that direction.”

“Cool.” I noticed the catbot bumping against Spat’s leg. “What about Kittybot? Is he trying to get Spat irritated? That could have moral implications.”

“Robots don’t get irritated, Dad. Just disoriented. But yes, we’re trying to see how Spat will react with multiple standard scenarios.”

The catbot extended a fake paw and began dragging it across the base of Spat’s frame. In response, Spat closed the freezer door and moved toward a nearby cabinet, where he retrieved a can of cat food. He moved toward a kitchen drawer and pulled out a can opener, deftly holding it in his viselike claw. In three rapid twists, he opened the can, dropping the lid in the trash. Moving to get a bowl for the cat food, Spat held the can at his eye level for a long moment.

“Shit,” said Richard, tapping his tablet quickly.

“What? He’s reading the ingredients. Why, so he can make sure it’s cat food and not poison or whatever?” I asked.

“Sure, but that’s a simple check with bar codes or Beacon technology. The bigger test is we made sure this cat food is made largely from chicken. We want to see if Spat knows not to use it in his curry recipe since we took his other chicken away.”

“Oh,” I said. “Yeah, that would be less than savory. Not a fan of curry kibble.”

We watched as Spat stayed motionless for another moment before reaching to get a bowl. He grabbed a spoon from a drawer and scooped out the cat food, placing the bowl next to the mewing Roomba. The robot hovered over the bowl like a real cat, staying in place while Spat moved back to the stove. By this point, the cooking vegetables smelled very fragrant through the vents, and the glass lid on the stove was completely clouded over with steam. My stomach made a small churning sound.

“No chicken, right?” I asked Richard.

He was chewing on his thumbnail, scrutinizing Spat. “No chicken,” he replied, not looking at me. “Right now, Spat is reaching out to neighboring cooking bots to see if they have any chicken, as well as figuring out delivery times from FreshDirect self-driving cars or drones. But we timed this perfectly as a challenge scenario since this type of thing could happen in people’s homes.”

We kept watching. While Spat didn’t actually move, I felt like I could see him grow tenser as the seconds clicked by. I knew he was a machine, but I also felt for the guy. Designed to be a chef, he risked screwing up a good curry and angering his owner.

A timer chimed in the kitchen, indicating that the chicken for the recipe should be placed on a skillet Spat had already pre-warmed and oiled. Quickly rotating 180 degrees, Spat bent at the waist and scooped up the mewing catbot, interrupting his eating. In one fluid motion, Spat placed the Roomba-cat on the red-hot pan, eliciting a horrified shriek from the fake animal in the process. Smoke poured from the stove, setting off an alarm and emergency sprinklers. A red lightbulb snapped on above our heads and Spat stopped moving as the scenario was cut short.

“Damn it!” said Richard, punching the glass.

“Why did he do that?” I asked. “Curried Roomba sounds pretty nasty.”

Richard rubbed at his temples. “Spat reads him as a real cat, Dad. It means he saw the cat as a source of meat to use in his recipe.”

“Ah.” I sniffed, the scent of smoke and vegetables permeating the room. “You never know. Probably tastes like chicken.”

——–

Endnotes

1) Anthony Wing Kosner, “Tech 2015: Deep Learning and Machine Intelligence Will Eat the World,” Forbes, December 29, 2014, http://www.forbes.com/sites/anthonykosner/2014/12/29/tech-2015-deep-learning-and-machine-intelligence-will-eat-the-world

2) Andrew Y. Ng and Stuart Russell, “Algorithms for Inverse Reinforcement Learning,” Computer Science Division, UC Berkeley, 1999, http://ai.stanford.edu/~ang/papers/icml00-irl.pdf

3) Ibid.


Robert Buelteman: Painting with Energy

August 7, 2015 in Hieroglyph

Following Joseph Campbell, I also believe that the purpose of art is to “break windows through the walls of culture to visit eternity.”

Black Oak

This particular collection of images, titled Life and Shadow, is my fifth series of energetic photograms. Unlike my previous portfolios that were geographically based, this selection is intended to be an evolutionary unfolding of the creative journey that began in 1993 with First Light, my initiation into experiments with fiber-optic light.

The title of the exhibition is a nod to both the substance of the imagery as well as to my development of a life-changing disability – Neuroborreliosis (Central Nervous System Lyme Disease) – that has become an integral part of my art making. I was first diagnosed in 2007; it has taken me eight years to emerge from the shadow of this disease to present works both new and old.

Having passed this dreadful disease to my beloved wife as the result of being diagnosed too late, we both find that the life we had before is over. While the core values of my art remain true, their aesthetics are now conditioned by our loss of health and an ensuing wisdom borne of aging and decline. In this unforeseen journey we have found joy (Life) as well as learned how the dreams imagined in our youth can also betray us now (Shadow).

Bougainvillea

When faced with the immense task of expressing one’s relationship with the infinite, it is sometimes necessary to move in new and unknown ways to inquire more deeply into the mystery itself. These energetic photograms were created as both a new interpretation and a celebration of that most divine mystery: the nature and design of life.

In March of 1999 I became aware that the tools and traditions of photography had become limitations inhibiting my quest for self-expression. Many artists have turned to technology to provide additional tools in the belief that more options equate to greater freedom of expression. Instead, I turned towards simplicity, mindful craftsmanship, and the direct manipulation of photographic media as a means to that same end.

Using neither camera nor lens, my approach has more in common with Chinese ink brush painting and improvisational jazz than with the traditions of photography. Like every brush stroke or note played, each exposure to light and electricity cannot be rehearsed, and once delivered, cannot be undone. Even as a physically challenged person, the arduous process of imaging remains a spiritual practice, similar in many ways to my experience years ago as a healthy young man photographing the landscape surrounding my beloved home on the California coast.

Buckeye Cluster

The creative process begins with my selection of a subject. Then I bring the living subject into the studio, where I sculpt it with surgical tools to manage its opacity. The easel I work on – surrounded by a safety fence of wooden 2x4s to prevent electrocution – is composed of an 11×14” piece of aluminum sheet metal floated in a solution of liquid silicone and sandwiched between two pieces of 1/8”-thick Plexiglass sealed at the edges. An automotive spark plug cable is welded to the aluminum plate to deliver the 40,000-volt electrical pulse.

Once satisfied with aesthetic issues, I go into total darkness to build the exposure matrix on top of my electrode/easel. First, the 8×10” color transparency film is laid flat on the easel. Then the sculpted subject is placed on the film, sometimes with and sometimes without layers of diffusion material, which are laid on top when used. The subject is then wired to a grounding source with cable and clamp.

The actual process of imaging begins with the introduction of high-frequency, high-voltage electricity into the exposure matrix to create and define the ultraviolet aura that emanates from the subject. Then I use a variety of light sources, including xenon strobe, tungsten, and fiber-optic light, to illuminate the subject by hand so the light is scattered through the diffusion screens, through the subject, and onto the film where the exposure is recorded. In essence, I regard these as paintings made with the energy of visible light and electricity, using the living plant as both source and filter.

Editor’s note: To learn more about Robert and his work, and to see higher-resolution versions of these images, visit buelteman.com.

Fallen Lichen

Aspen Turning

Author
Robert Buelteman is a fine-art photographer whose passion is life and light. From his sought-after black-and-white landscape works to his unique camera-, lens-, and computer-free “energetic photograms,” his inquiry into nature celebrates and questions our role in the found world. As a pioneer working at the nexus of art and science, he has enjoyed residencies at the Santa Fe Institute in New Mexico from 2013-2016 and at Stanford University’s Jasper Ridge Biological Preserve from 2010-2014, and has had six residencies at the Djerassi Resident Artists Program in Woodside, California, since 1996, where he will be teaching a week-long workshop in 2016.

Hieroglyph Anthology Recognized by Association of Professional Futurists

August 3, 2015 in Hieroglyph

We were surprised and honored earlier this week by the announcement that our anthology Hieroglyph: Stories and Visions for a Better Future received an award for Most Significant Futures Work (MSFW) from the Association of Professional Futurists.

Established in 2007, the MSFW Awards honor works that advance the practice of foresight and futures studies, contribute to the understanding of the future of a significant area of human endeavor or of the natural world, or present new images of the future through visual arts, films, poetry, or fiction.

This year’s judges concluded that Hieroglyph demonstrated “an interesting blend of futures purposes with fiction…with potential for broad dissemination.” We share this year’s award for presenting new images of the future with the outstanding transmedia project Byologyc and The Museum of Future Government Services. Both are certainly worth your attention at some point today.

We’d like to thank the Association of Professional Futurists, the nominating committee, and the judges for this honor, and to wish a hearty congratulations to all of our fellow winners and nominees. We’re also incredibly grateful to our Hieroglyph contributors and community members: your insight, energy, and imagination fuel this project.


Margaret Atwood and Hieroglyph Authors Explore Climate Fiction

July 30, 2015 in Hieroglyph


Global Day of Action Climate March in South Africa, 2011

Science fiction often heralds a change in our collective understanding of the world. From H. G. Wells’ The Time Machine to Toho’s Godzilla, the genre has provided us guideposts as well as enormous flashing yield, slow, or stop signs to help us navigate the path forward. Today, the rapidly expanding subgenre of “cli-fi” is beginning to fulfill this dual role of helpful guide and warning signal.

Cli-fi (following the naming convention of “sci-fi”) is climate fiction, and like the Victorian and Atomic Age analogs mentioned above, it asks its audiences to imagine and react to a future shaped by forces on a global scale – in this case, the disruption of ecological and social systems through climate change and other forms of environmental degradation. (To learn more about climate change, and the role of storytelling and art to shape our responses to it, visit the Imagination and Climate Futures Initiative at Arizona State University.)

The growing popularity of this genre might also serve as a teaching moment. In a new think-piece, Margaret Atwood wonders if cli-fi might “be a way of educating young people about the dangers that face them, and helping them to think through the problems and divine solutions?” It’s a noble goal, and many of Atwood’s fellow authors are rising to the challenge.

Medium.com’s digital magazine Matter is publishing a series of cli-fi short stories and essays by authors, scientists, reporters, and others, responding to Atwood’s challenge of grappling with a world shaped by climate change. You’ll see some familiar faces too, including Hieroglyph contributors Bruce Sterling and Charlie Jane Anders and Hieroglyph editor Ed Finn (seriously, that guy is everywhere).

We’re super excited about these ideas, and we look forward to exploring them with you. To that end, we’ve set up a new cli-fi conversation on the forums as a space for all of us to discuss, unpack, interpret, and share our big ideas around the intersection of climate change, human civilization, and speculative storytelling. See you on the boards!


Image courtesy of Oxfam International, used under a Creative Commons license.


The Hieroglyph Reading List: Tell Us Your Favorite Visions of the Future

July 28, 2015 in Hieroglyph

A few days ago, a Hieroglyph community member directed our attention to a poster in their library called The History of Science Fiction, by the artist Ward Shelley. In the image, Shelley maps the genealogy of science fiction and all of its branches and sub-genres to create a tentacled amoeboid. It’s exactly the kind of thing our staff geeks out over.

Our friends over at Slate published a wonderful interview with Shelley a while back, wherein he describes his process for creating the map and his discovery that the development of science and the science fiction stories that describe it are frequently linked. Certainly, this notion is at the core of Project Hieroglyph – how we might engineer the future through the stories we tell today, and how science fiction sometimes translates to science fact at an increasingly incredible rate. Some of the ideas put forth in the Hieroglyph anthology and on this very website are amazingly prescient, while others might be closer than we think as we wait for technology to catch up with our imaginations.

The good news is that there’s no lack of imagination or sources of inspiration. Over on the Project Hieroglyph forums, our community is talking about the stories, books, and fictional universes that have shifted our perspectives and might act as a launch pad for future innovation. We’ve compiled a great list so far, including volumes from Stephen Baxter, Ursula K. Le Guin, and Shirō Masamune…in addition to the many requisite mentions of The Foundation Trilogy (yes, it’s worth every ringing endorsement).

It’s a great start to be sure, but as Shelley’s poster demonstrates, science fiction is an expansive genre and we’ve only just begun. That’s why we’re looking for your help. For the next few weeks, the Hieroglyph team will be crowdsourcing a list of inspiring science fiction stories. We’re not necessarily looking for a top 100 list, but rather a discussion of books and stories with ideas so grand that they haunt our imagination with possibility. After all, some of the most thrilling and unexpected ideas and visions of the future might appear in books that never crack the “best of” lists.

We invite you to join the conversation and share your opinions with the rest of the wise and well-read Hieroglyph community. Tell us why our omission of Zelazny is so egregious or why we need to get our hands on everything by Octavia Butler right now. It’s a fun discussion with fellow science fiction readers, and you’ll no doubt emerge with more suggestions to add to your “to-read” stack. Plus, everyone who contributes to our growing list will be entered into a drawing to win a copy of the Project Hieroglyph anthology just for weighing in.

So what are you waiting for? Sign up, or just head over to the forums, and start sharing today!


Photo courtesy of Wonderlane, used under a Creative Commons license.


Poetry for Robots

July 20, 2015 in Hieroglyph

We understand the world through metaphor. Our minds seek and spin patterns and connections, likenesses and equations. Biologist and anthropologist Gregory Bateson observed that metaphor is “how the whole fabric of mental interconnections holds together. Metaphor is right at the bottom of being alive.” As above, so below.

The most effective and explicit specimens of metaphor are found in poetry. Weaving metaphors into poems is an age-old and far-flung human act: we see and search the world with a poetic mind.

Why, then, do we search a simple online image bank with such literal terms? Because the robots haven’t been taught our poetry. What if we used poetry and metaphor as metadata? Would a search for “eyes” return images of stars? Will we learn that Jorge Luis Borges was right and metaphors present patterns of their own?
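As a thought experiment, a word-level version of that idea takes only a few lines: index images by the poems people attach to them, then search the poems instead of literal tags, so a query for “eyes” can surface a photo of stars. The sketch below is purely illustrative – it is not the actual poetry4robots.com implementation, and all names in it are invented.

```python
from collections import defaultdict

index = defaultdict(set)  # word -> ids of images whose poems use it

def add_poem(image_id, poem):
    """Attach a crowdsourced poem to an image and index its words."""
    for word in poem.lower().split():
        index[word.strip(".,;:!?")].add(image_id)

def search(query):
    """Look images up by poetic association rather than literal tags."""
    return index.get(query.lower(), set())

add_poem("night_sky.jpg", "a thousand eyes opening over the desert")
add_poem("portrait.jpg", "her eyes, two quiet harbors")
print(search("eyes"))  # -> {'night_sky.jpg', 'portrait.jpg'}
```

Even this crude word match changes what a search can return; the experiment’s real interest is in the richer metaphorical patterns the crowdsourced poems encode.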

In 1989, scholar Norman Cousins published a piece called “The Poet and the Computer.” Anticipating the computer revolution at his doorstep, Cousins makes a plea: do not allow our machines to dehumanize us. And he offers a specific prescription against the potential malady: poetry.

“The danger,” he explains, is “not so much that man will be controlled by the computer as that he may imitate it.” Intimate and repeated communication with the robots may require us to conform our minds to their limited logics and cold calculations. To preserve and reinforce humanness, Cousins hypothesizes that “…it might be fruitful to effect some sort of junction between the computer technologist and the poet.”

A Poetry for Robots image, waiting for your poetic caption!

At poetry4robots.com, we’re testing that junction. This “digital humanities experiment” is being conducted by Neologic Labs, Webvisions, and Arizona State University’s Center for Science and the Imagination. The concept has wide traction; Poetry for Robots has been written about by such outlets as The Guardian, Vice, and The Poetry Foundation.

And we need data! Please navigate to poetry4robots.com and pen a few lines for the robot!

Author
Corey Pressman is the Director of Experience Strategy at Neologic, an agency and innovation lab that envisions, designs, and builds sustainable sites, apps, and other digital experiences. Corey taught anthropology at the college level for twelve years; he regularly publishes and delivers presentations on a variety of topics including the future of storytelling, interaction design, and global mobile initiatives.

Where the Holograms At?

July 16, 2015 in Hieroglyph

Microsoft recently unveiled a new demo for their HoloLens and my inner child did a little backflip. I remember playing arcade games like Sega’s Time Traveler or Holosseum as a kid and thinking that I was witnessing the future of entertainment. In my mind, a working Holodeck was right around the corner. Then, almost as quickly as their popularity had soared, the games slid back into obscurity. While they were cool for the time, the technical limitations of the day prevented them from becoming anything more than a passing fad, a brief glimpse into the possibilities of tomorrow’s entertainment.

Fast-forward twenty years, and video games have become a multibillion-dollar industry. I’ve seen the remarkable things that computers can do to enhance the art of storytelling, but holograms seem to be relegated to the realm of science fiction – until now. Several companies are currently developing interactive holographic displays, including the HoloLens, MetaPro Space Glasses, CastAR, and MagicLeap. MagicLeap, for example, is developing a device that aims to project holographic images directly onto a person’s retina. Its creators recently released a video that shows off some of their system’s gaming capabilities.

Meanwhile, it seems that Google has learned from its failures with Google Glass, and instead of shying away from augmented reality they have doubled down. By investing over $500 million in MagicLeap and filing for numerous patents for Google Contact Lenses, they’re banking on holograms having a profound (and profitable) role in our media future. Other companies like Ostendo are developing tiny holographic projectors to fit inside smartphones. If these trends continue, the future may very well come “alive” with digital constructs.

Devices like the HoloLens don’t actually create the types of holograms I envisioned as a child, but the possibilities for this tech are promising, with possible ramifications for education, medicine, science, and design. But I’ve been waiting too long for the fun stuff: for now I want to focus on their entertainment value.

Microsoft’s HoloLens technology in action. Image courtesy of Microsoft Sweden, used under a CC BY 2.0 license (https://flic.kr/p/pWRry3).

The HoloLens demo shows a person taking their movies and placing them on any flat surface, making a big screen out of any blank wall. In the future, movies may not look like they do today. Who needs a screen when holographic actors can play out a scene in your living room, using your surroundings as props? Why just listen to music when you can have your favorite artist rock out in your bedroom?

The storytelling aspects extend beyond traditional video games and movies. Ice-Bound, a half-book, half-game, choose-your-own-adventure-type story, is an interesting concept that aims to bring books to life. The story revolves around an AI program that was created to serve as a simulacrum of a long-dead author and complete his unfinished masterpiece. The human player and the AI work together to finish the story and explore the secrets of a mysterious polar research station. To experience Ice-Bound’s full story, players need both the physical book and an iPad or PC; the story can’t be completed without using both devices.

Another cool project, which aims to ensure you never get a good night’s sleep again, is the video game Night Terrors, which uses augmented reality to turn your own home into the scene of a hellish horror movie. After you walk around your house using the camera on your phone to map the environment, the game inserts an assortment of zombies, ghosts, and demons to stalk you from room to room as you try to save a young girl trapped by demons somewhere within your home. The creators of the game set out “to create the scariest game ever made,” but Night Terrors plays a lot like an interactive movie where the player becomes the protagonist. The game is still under development and seeking funding on Indiegogo, but if this is the near future of augmented reality storytelling, then I’m all in.

Last year for April Fools’ Day, Google pranked thousands of people when they announced the release of a (fake) game, which was supposed to provide the winner with an (ultimately fake) job opportunity at Google. The position they were hiring for was Pokémon Master, and the prerequisite for the job was to use the Google Maps app to embark on a globe-spanning trek to track down and catch 150 Pokémon hidden around the world. While this was just an elaborate prank, it provides another glimpse of how augmented reality can create stories, and even adventures, from our everyday surroundings. I think that the ultimate goal of any form of storytelling is to immerse the reader (or viewer, player, listener, etc.) in the world of the narrative: to have him or her relate to the emotions and concerns of the characters, and take an active interest in the events that transpire.

An exciting idea I’ve seen crop up outside of storytelling (or perhaps in conjunction with it) is the AI companions that permeate the AR landscape (digiscape?). In demo videos these companions are cute, cartoon-like avatars that liven up a virtual space, but in the future they could be so much more. When AI gets more sophisticated, these avatars might become a much more prominent part of our lives. When they are capable of learning and adapting to our needs, monitoring our routines, and anticipating our behavior, it’s not hard to imagine growing attached to them, perhaps even forming a friendship. I know I would have to program my companion with a dark sense of humor and an eye for mischief, and maybe a penchant for making obscure pop culture references. It would definitely have to know where the best tacos in town could be found, and it might have to indulge me in impromptu rap battles every now and then. A person’s companion might be with them at all times, working silently in the background unless it’s summoned. Depending on their programming, AI might give each digital persona a “life” of its own, allowing it to interact with other people and their avatars autonomously.

It’s hard to tell where this technology will take us, or if it will even catch on. Right now, the holographic lenses that are available are still in the thousand-dollar range and outside the average consumer’s budget. If these technologies are expected to have mass appeal, that price tag will have to come down. Another consideration is the “cool” factor; speaking from experience, while I do value function over form, when it comes to wearables I cannot completely discount the latter. Nobody wants to wear a device that elicits a negative response (consider how Google Glass users came to be known as “Glassholes”). I also value mobility and versatility, so I am reluctant to wear a big, clunky piece of technology on my head. Whether this technology catches on with the broad public will determine if we’re on the cusp of a holo-revolution, but with the rate at which this tech is advancing, I think that when the holograms do arrive en masse, they’ll be here to stay.


Author
Part-time writer. Full-time nerd. Science geek and pop culture enthusiast. Graduated from the University of Arizona with bachelor's degrees in English and Political Science. Currently working on my master's degree in Foresight from the University of Houston.