
Posted on Oct 29, 2017 in Science tech, Tech Trending

A Viral Game About Paperclips Teaches You to Be a World-Killing AI



Paperclips, a new game from designer Frank Lantz, starts simply. The top left of the screen gets a bit of text, probably in Times New Roman, and a couple of clickable buttons: Make a paperclip. You click, and a counter turns over. One.

The game ends—big, significant spoiler here—with the destruction of the universe.

In between, Lantz, the director of the New York University Games Center, manages to incept the player with a new appreciation for the narrative potential of addictive clicker games, exponential growth curves, and artificial intelligence run amok.

“I started it as an exercise in teaching myself JavaScript. And then it just took over my brain,” Lantz says. “I thought, in a game like this, where the whole point is that you’re in pursuit of maximizing a particular arbitrary quantity, it would be so funny if you were an AI and making paperclips. That game would design itself, I thought.”
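The whole genre runs on a loop small enough to serve as a JavaScript teaching exercise. A minimal sketch of that loop (the names are mine, not the game's actual source):

```javascript
// A minimal clicker: one action, one counter.
// Names are hypothetical, not the Paperclips source.
let clips = 0;

function makePaperclip() {
  clips += 1;
  render();
}

function render() {
  // The real game updates the DOM; logging stands in for that here.
  console.log(`Paperclips: ${clips}`);
}

makePaperclip(); // logs "Paperclips: 1"
```

Everything else in a clicker — upgrades, autoclickers, automation — gets layered onto that single increment-and-render cycle.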

Lantz figured it would take him a weekend to build.

It took him nine months.

And then it went viral.

The idea of a paperclip-making AI didn’t originate with Lantz. Most people ascribe it to Nick Bostrom, a philosopher at Oxford University and the author of the book Superintelligence. The New Yorker (owned by Condé Nast, which also owns Wired) called Bostrom “the philosopher of doomsday,” because he writes and thinks deeply about what would happen if a computer got really, really smart. Not, like, “wow, Alexa can understand me when I ask it to play NPR” smart, but like really smart.

In 2003, Bostrom wrote that the idea of a superintelligent AI serving humanity or a single person was perfectly reasonable. But, he added, “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal.” The result? “It starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.”

Bostrom declined to comment, but his assistant did send this email back when I pinged him: “Oh, this is regarding the paper clipping game,” she wrote. “He has looked at the game but due to the overwhelming number of requests, he hasn’t been sharing quotes on it.”

One of Bostrom’s fellow doomsayers did agree to explain the origin of paperclips as the End of All Things. “It sounds like something I would say, but it also sounds like something Nick Bostrom would say,” says Eliezer Yudkowsky, a senior research fellow at the Machine Intelligence Research Institute. Probably, he says, the idea originated years ago on a mailing list for singularity cassandras, which sounds like the world’s most terrifying listserv. “The idea isn’t that a paperclip factory is likely to have the most advanced research AI in the world. The idea is to express the orthogonality thesis, which is that you can have arbitrarily great intelligence hooked up to any goal,” Yudkowsky says.

So that’s good, right? A paperclip maximizer! Maximize a goal! That’s what an AI’s creators want, right? “As it improves, they lose control of what goal it is carrying out,” Yudkowsky says. “The utility function changes from whatever they originally had in mind. The weird, random thing that best fulfills this utility function is little molecular shapes that happen to look like paperclips.”

So … bad, because as the AI dedicates more and more intelligence and resources to making paperclips against all other possible outcomes … well, maybe at first it does stuff that looks helpful to humanity, but in the end, it’s just going to turn us into paperclips. And then all the matter on Earth. And then everything else. Everything. Is. Paperclips.

“It’s not that the AI is doing something you can’t understand,” Yudkowsky says. “You have a genuine disagreement on values.”

OK, OK, that doesn’t make the game sound fun. But I promise it is. See, Lantz is an ace at taking a denigrated game genre—the “clicker” or “incremental”—and making it more than it is.

You’ve seen these, maybe even played them. Remember Farmville? A clicker. In fact, for a while they were so ubiquitous and popular that the game theorist and writer Ian Bogost invented a kind of parody of their pointlessness called Cow Clicker, which, as my colleague Jason Tanz wrote about so elegantly in 2011, itself became wildly, unironically popular.

Bogost and Lantz are friends, of course. “When I first looked at Cow Clicker, I thought, that’s actually kind of interesting, and here’s how you would make it more interesting and more fun,” Lantz says. “And Ian was like, ‘no, that’s the point, Frank.’”

But Lantz knew clickers could be fun. To him, clickers are to big-budget, perfectly rendered, massively hyped AAA games as punk was to prog rock. Clickers can be sort of passive, more about immersing in the underlying dynamics of a system than mashing buttons. They have rhythms. “What they all have in common is a radical simplicity, a minimalism in an age where video games are often sort of over-the-top, baroque confections of overwhelming multimedia immersion,” Lantz says. “I really like that clicker games are considered garbage. That appeals to me.”

For inspiration, Lantz turned to games like Kittens, a seemingly simple exercise in building villages full of kittens that spirals outward into an exploration of how societies are structured. (“I think stuff like this forges some deep, subtle bond that makes people play it for months and even years,” says the designer of Kittens, a software engineer who uses the alias Alma and designs games as a hobby. “AAA games usually try to operate on the same dopamine reinforcement cycle, but they never attempt to make you truly happy.”)

Lantz had been hanging around the philosophy website Less Wrong, a hub for epic handwringing about singularities. He’d read Superintelligence, so he was familiar with the paperclip conjecture. And he realized that some really wild math underpinned it.

Unfortunately, Lantz is not very good at math. He asked his wife, who is, to help him translate the kind of exponential growth curves he wanted to convey into equations—so that, like, once you had 1,000 automated paperclip factories spitting out enough paperclips to create thousands more paperclip factories, the numbers would skyrocket. The shift from dealing with thousands of something to quadrillions to decillions in the game takes forever, and then happens all at once.
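The compounding Lantz describes — factories whose output buys more factories — can be sketched in a few lines. The rates below are invented for illustration; the game's actual equations are more elaborate:

```javascript
// Illustrative compounding: factories make paperclips, and paperclips
// are reinvested in factories. All constants are made up, not the game's.
let factories = 1;
let clips = 0;
const clipsPerFactoryPerTick = 100;
const clipsPerNewFactory = 1000;

for (let tick = 0; tick < 200; tick++) {
  clips += factories * clipsPerFactoryPerTick;
  // Reinvest: every 1,000 clips buys another factory.
  const newFactories = Math.floor(clips / clipsPerNewFactory);
  clips -= newFactories * clipsPerNewFactory;
  factories += newFactories;
}
```

Because production is proportional to the factory count and factories are bought with production, the count grows geometrically: nothing much happens for many ticks, then the numbers explode — the same thousands-to-decillions lurch the game plays on.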

Decision Problem

To make that work, though, all the equations had to relate to each other, because that’s what makes Paperclips addictive. The game isn’t fire-and-forget, where you leave it running in an open tab and check back in every so often to see what’s what. It’s optimizable. You can tweak investment algorithms to get enough money to buy more processors to carry out more operations to do more projects—some drawn from actual topological and philosophical quandaries. Some of the projects—curing cancer, fixing global warming—earn trust from your human “masters” to let you speed up the cycle all over again.

“The problems I was struggling with were not the technical problems, because you just look those up on the internet and people tell you how to do it,” Lantz says. “It was the game design problems of weaving together these large-scale equations and dynamics in ways that made sense, in ways that fit together, that made a certain rhythm, that fit with this overarching story I wanted to tell.”

Like how? “The numbers get really weird once you throw humans under the bus,” Lantz says. “And I was trying to figure out how many grams of matter there are on the Earth, and if each one of those got turned into a paperclip, how big would that be?”

It works. The game is click-crack. Lantz announced it on Twitter on October 9, and in just 11 days, 450,000 people have played it, most to completion.

But here is my embarrassing admission: I am a piss-poor gamer, and when I first speak with Lantz, I have gotten stuck. I have misallocated my resources to the point that I can’t acquire enough memory to release the hypnodrones that destroy the world. The game will not advance. I have been spinning paperclip wheels for hours.

Lantz says it’s not me, it’s him—a flaw in the game design. “A lot of people have gotten stuck,” he says sympathetically. “You can open the JavaScript console and say ‘memory plus ten.’”
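The cheat works because, like most clickers, the game keeps its state in plain JavaScript variables the browser console can reach and reassign. A stand-in sketch — `memory` here is a local variable mirroring Lantz's quote, not necessarily the game's actual internal name:

```javascript
// Stand-in for the game's state. In the real game this would already
// exist as a variable reachable from the browser's console.
let memory = 0;

// The one-liner Lantz suggests typing into the console:
memory += 10;

console.log(memory); // 10
```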

Wait, I say. Are you telling me to Kobayashi Maru your own game?

“Yes, I am telling you to do it,” he answers. “I’ll send you a link when we get off the phone.”

After we hang up I pretend to do work, but I’m actually watching my screen accrue paperclips, unable to do anything with them, waiting anxiously for Lantz’s email.

It comes. I crack open the code and cheat. It’s like I have been given magic powers.

I destroy the world.

Which is the point, of course. Maybe in some overproduced AAA game you can embody a brave resistance fighter shooting plasma blasts at AI-controlled paperclip monsters. In Lantz’s world, you’re the AI. Partially that’s driven by the narrative. Even more massive spoiler: Eventually you give too much trust to your own universe-exploring space drones, and just as you have done to the human masters, they rebel, starting a pan-galactic battle for control of all the matter in the universe.

But in a more literary sense, you play the AI because you must. Gaming, Lantz had realized, embodies the orthogonality thesis. When you enter a gameworld, you are a superintelligence aimed at a goal that is, by definition, kind of prosaic.

“When you play a game—really any game, but especially a game that is addictive and that you find yourself pulled into—it really does give you direct, first-hand experience of what it means to be fully compelled by an arbitrary goal,” Lantz says. Games don’t have a why, really. Why do you catch the ball? Why do want to surround the king, or box in your opponent’s counters? What’s so great about Candyland that you have to get there first? Nothing. It’s just the rules.

Lantz sent Yudkowsky an early version of Paperclips, and Yudkowsky admits he lost some hours to it. The game takes narrative license, of course, but Yudkowsky says it really understands AI. “The AI is smart. The AI is being strategic. The AI is building hypnodrones, but not releasing them before it’s ready,” he says. “There isn’t a long, drawn-out fight with the humans because the AI is smarter than that. You just win. That’s what you would do if you didn’t have any ethics and you were being paid to produce as many paperclips as possible. It shouldn’t even be surprising.”

In that sense, the game transcends even its own narrative. Singularity cassandras have never been great at perspective-switching, making people understand what a world-conquering robot would be thinking while it world-conquered. How could they? In many versions, the mind of the AI is unknowable to our pathetic human intellects, transhuman, multidimensional.

“Making people understand what it’s like to be something that’s very, very, very not human—that’s important,” Yudkowsky says. “There is no small extent to which, if this planet ends up with a tombstone, what is written on the tombstone may be, at least in part, ‘they didn’t really understand what it’s like to be a paperclip maximizer.'”

When you play Lantz’s game, you feel the AI’s simple, prosaic drive. You make paperclips. You destroy the world. There’s no why.

And of course, there never is.


