
The AI Revolution: The Road to Superintelligence

By Tim Urban (Please note: this is not my article; I just loved it and wanted to always have it available).

Note: The reas­on this post took three weeks to fin­ish is that as I dug into research on Arti­fi­cial Intel­li­gence, I could not believe what I was read­ing. It hit me pretty quickly that what’s hap­pen­ing in the world of AI is not just an import­ant top­ic, but by far THE most import­ant top­ic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situ­ation and why it mat­ters so much. Not shock­ingly, that became out­rageously long, so I broke it into two parts. This is Part 1—Part 2 is here.

_______________

We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge

 

What does it feel like to stand here?

[Image: a figure standing at the present moment on a graph of human progress over time]

It seems like a pretty intense place to be standing—but then you have to remem­ber some­thing about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actu­ally feels to stand there:

[Image: the same graph, with everything to the right of the present moment hidden from view]

Which prob­ably feels pretty nor­mal…

_______________

The Far Future—Coming Soon

Ima­gine tak­ing a time machine back to 1750—a time when the world was in a per­man­ent power out­age, long-dis­tance com­mu­nic­a­tion meant either yelling loudly or fir­ing a can­non in the air, and all trans­port­a­tion ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to under­stand what it would be like for him to see shiny cap­sules racing by on a high­way, talk to people who had been on the oth­er side of the ocean earli­er in the day, watch sports that were being played 1,000 miles away, hear a music­al per­form­ance that happened 50 years ago, and play with my magic­al wiz­ard rect­angle that he could use to cap­ture a real-life image or record a liv­ing moment, gen­er­ate a map with a paranor­mal mov­ing blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the oth­er side of the coun­try, and worlds of oth­er incon­ceiv­able sor­cery. This is all before you show him the inter­net or explain things like the Inter­na­tion­al Space Sta­tion, the Large Had­ron Col­lider, nuc­le­ar weapons, or gen­er­al relativ­ity.

This exper­i­ence for him wouldn’t be sur­pris­ing or shock­ing or even mind-blowing—those words aren’t big enough. He might actu­ally die.

But here's the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agri­cul­tur­al Revolu­tion gave rise to the first cit­ies and to the concept of civil­iz­a­tion. If someone from a purely hunter-gather­er world—from a time when humans were, more or less, just anoth­er anim­al species—saw the vast human empires of 1750 with their tower­ing churches, their ocean-cross­ing ships, their concept of being “inside,” and their enorm­ous moun­tain of col­lect­ive, accu­mu­lated human know­ledge and discovery—he’d likely die.

And then what if, after dying, he got jeal­ous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and lan­guage to for the first time.

In order for someone to be trans­por­ted into the future and die from the level of shock they’d exper­i­ence, they have to go enough years ahead that a “die level of pro­gress,” or a Die Pro­gress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gather­er times, but at the post-Agri­cul­tur­al Revolu­tion rate, it only took about 12,000 years. The post-Indus­tri­al Revolu­tion world has moved so quickly that a 1750 per­son only needs to go for­ward a couple hun­dred years for a DPU to have happened.

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they're more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.1

This works on smal­ler scales too. The movie Back to the Future came out in 1985, and “the past” took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the new­ness of TVs, the prices of soda, the lack of love for shrill elec­tric gui­tar, and the vari­ation in slang. It was a dif­fer­ent world, yes—but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much big­ger dif­fer­ences. The char­ac­ter would be in a time before per­son­al com­puters, inter­net, or cell phones—today’s Marty McFly, a teen­ager born in the late 90s, would be much more out of place in 1985 than the movie’s Marty McFly was in 1955.

This is for the same reas­on we just discussed—the Law of Accel­er­at­ing Returns. The aver­age rate of advance­ment between 1985 and 2015 was high­er than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the pri­or 30.

So—advances are get­ting big­ger and big­ger and hap­pen­ing more and more quickly. This sug­gests some pretty intense things about our future, right?

Kur­z­weil sug­gests that the pro­gress of the entire 20th cen­tury would have been achieved in only 20 years at the rate of advance­ment in the year 2000—in oth­er words, by 2000, the rate of pro­gress was five times faster than the aver­age rate of pro­gress dur­ing the 20th cen­tury. He believes anoth­er 20th century’s worth of pro­gress happened between 2000 and 2014 and that anoth­er 20th century’s worth of pro­gress will hap­pen by 2021, in only sev­en years. A couple dec­ades later, he believes a 20th century’s worth of pro­gress will hap­pen mul­tiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accel­er­at­ing Returns, Kur­z­weil believes that the 21st cen­tury will achieve 1,000 times the pro­gress of the 20th cen­tury.2
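
The "five times" figure is just the ratio of a century's worth of progress to the 20 years it would take at the year-2000 rate. The same arithmetic, applied to the shrinking intervals above (the ~7x and ~14x restatements below are extrapolations of that pattern, not figures quoted by Kurzweil), looks like this:

```latex
% A century's worth of progress achieved in only 20 years at the year-2000 rate:
\[
  \frac{\text{rate in 2000}}{\text{20th-century average rate}}
  = \frac{100 \text{ years of progress}}{20 \text{ years of elapsed time}} = 5
\]
% Applying the same ratio to the shrinking intervals: a century's worth in
% 14 years (2000 to 2014) is roughly 7x the 20th-century average, and in 7
% years (by 2021) roughly 14x. Letting the interval keep shrinking over the
% rest of the century is how the ~1,000x figure for the 21st century arises.
```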

If Kur­z­weil and oth­ers who agree with him are cor­rect, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly dif­fer­ent than today’s world that we would barely recog­nize it.

This isn’t sci­ence fic­tion. It’s what many sci­ent­ists smarter and more know­ledge­able than you or I firmly believe—and if you look at his­tory, it’s what we should logic­ally pre­dict.

So then why, when you hear me say some­thing like “the world 35 years from now might be totally unre­cog­niz­able,” are you think­ing, “Cool….but nah­h­h­h­h­hh”? Three reas­ons we’re skep­tic­al of out­land­ish fore­casts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.

[Image: projected future progress: linear and frozen-rate extrapolations vs. the actual exponential trajectory]
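
To make those three projection mindsets concrete, here is a toy sketch; the "progress units," the 3% compounding rate, and the 30-year window are arbitrary illustrative numbers, not measurements of anything real:

```python
# Toy comparison of three ways to project 30 years of future "progress".
# The progress units and the 3% annual acceleration are arbitrary, illustrative numbers.

YEARS = 30
GROWTH = 1.03          # assumed annual growth of the *rate* of progress
current_rate = 10.0    # progress units per year, today

# (a) Straight-line thinking: assume the next 30 years look like the last 30.
# If the rate has been compounding, the last 30 years actually delivered:
last_30_years = sum(current_rate / GROWTH ** n for n in range(1, YEARS + 1))

# (b) Slightly cleverer: freeze today's rate and multiply it out.
frozen_rate = current_rate * YEARS

# (c) Exponential thinking: let the rate keep compounding.
compounding = sum(current_rate * GROWTH ** n for n in range(1, YEARS + 1))

print(f"(a) copy the last 30 years : {last_30_years:6.0f} progress units")
print(f"(b) today's rate, frozen   : {frozen_rate:6.0f} progress units")
print(f"(c) rate keeps compounding : {compounding:6.0f} progress units")
```

The ranking comes out (a) < (b) < (c), matching the paragraph above: both the straight-line and the frozen-rate projections undershoot the compounding one.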

2) The tra­ject­ory of very recent his­tory often tells a dis­tor­ted story. First, even a steep expo­nen­tial curve seems lin­ear when you only look at a tiny slice of it, the same way if you look at a little seg­ment of a huge circle up close, it looks almost like a straight line. Second, expo­nen­tial growth isn’t totally smooth and uni­form. Kur­z­weil explains that pro­gress hap­pens in “S‑curves”:

[Image: progress over time drawn as a series of stacked S-curves, each representing a new paradigm]

An S is cre­ated by the wave of pro­gress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of expo­nen­tial growth)
2. Rap­id growth (the late, explos­ive phase of expo­nen­tial growth)
3. A lev­el­ing off as the par­tic­u­lar paradigm matures3
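
Those three phases are the shape of a logistic ("S") curve. A minimal sketch, with an arbitrary midpoint and steepness chosen only to make the three phases visible:

```python
import math

def s_curve(t, midpoint=10.0, steepness=0.8):
    """Logistic function: slow start, explosive middle, leveling off."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Crude text plot of one paradigm's maturity over 20 time steps.
for t in range(0, 21, 2):
    level = s_curve(t)
    print(f"t={t:2d}  {level:4.2f}  " + "#" * int(level * 40))
```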

If you look only at very recent his­tory, the part of the S‑curve you’re on at the moment can obscure your per­cep­tion of how fast things are advan­cing. The chunk of time between 1995 and 2007 saw the explo­sion of the inter­net, the intro­duc­tion of Microsoft, Google, and Face­book into the pub­lic con­scious­ness, the birth of social net­work­ing, and the intro­duc­tion of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less ground­break­ing, at least on the tech­no­lo­gic­al front. Someone think­ing about the future today might exam­ine the last few years to gauge the cur­rent rate of advance­ment, but that’s miss­ing the big­ger pic­ture. In fact, a new, huge Phase 2 growth spurt might be brew­ing right now.

3) Our own exper­i­ence makes us stub­born old men about the future. We base our ideas about the world on our per­son­al exper­i­ence, and that exper­i­ence has ingrained the rate of growth of the recent past in our heads as “the way things hap­pen.” We’re also lim­ited by our ima­gin­a­tion, which takes our exper­i­ence and uses it to con­jure future predictions—but often, what we know simply doesn’t give us the tools to think accur­ately about the future.2 When we hear a pre­dic­tion about the future that con­tra­dicts our exper­i­ence-based notion of how things work, our instinct is that the pre­dic­tion must be naïve. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from his­tory, it’s that every­body dies.” And yes, no one in the past has not died. But no one flew air­planes before air­planes were inven­ted either.

So while nah­h­h­hh might feel right as you read this post, it’s prob­ably actu­ally wrong. The fact is, if we’re being truly logic­al and expect­ing his­tor­ic­al pat­terns to con­tin­ue, we should con­clude that much, much, much more should change in the com­ing dec­ades than we intu­it­ively expect. Logic also sug­gests that if the most advanced spe­cies on a plan­et keeps mak­ing lar­ger and lar­ger leaps for­ward at an ever-faster rate, at some point, they’ll make a leap so great that it com­pletely alters life as they know it and the per­cep­tion they have of what it means to be a human—kind of like how evol­u­tion kept mak­ing great leaps toward intel­li­gence until finally it made such a large leap to the human being that it com­pletely altered what it meant for any creature to live on plan­et Earth. And if you spend some time read­ing about what’s going on today in sci­ence and tech­no­logy, you start to see a lot of signs quietly hint­ing that life as we cur­rently know it can­not with­stand the leap that’s com­ing next.

_______________

The Road to Superintelligence

What Is AI?

If you’re like me, you used to think Arti­fi­cial Intel­li­gence was a silly sci-fi concept, but lately you’ve been hear­ing it men­tioned by ser­i­ous people, and you don’t really quite get it.

There are three reas­ons a lot of people are con­fused about the term AI:

1) We asso­ci­ate AI with movies. Star Wars. Ter­min­at­or. 2001: A Space Odys­sey. Even the Jet­sons. And those are fic­tion, as are the robot char­ac­ters. So it makes AI sound a little fic­tion­al to us.

2) AI is a broad top­ic. It ranges from your phone’s cal­cu­lat­or to self-driv­ing cars to some­thing in the future that might change the world dra­mat­ic­ally. AI refers to all of these things, which is con­fus­ing.

3) We use AI all the time in our daily lives, but we often don’t real­ize it’s AI. John McCarthy, who coined the term “Arti­fi­cial Intel­li­gence” in 1956, com­plained that “as soon as it works, no one calls it AI any­more.”4 Because of this phe­nomen­on, AI often sounds like a myth­ic­al future pre­dic­tion more than a real­ity. At the same time, it makes it sound like a pop concept from the past that nev­er came to fruition. Ray Kur­z­weil says he hears people say that AI withered in the 1980s, which he com­pares to “insist­ing that the Inter­net died in the dot-com bust of the early 2000s.”5

So let’s clear things up. First, stop think­ing of robots. A robot is a con­tain­er for AI, some­times mim­ick­ing the human form, some­times not—but the AI itself is the com­puter inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the soft­ware and data behind Siri is AI, the woman’s voice we hear is a per­son­i­fic­a­tion of that AI, and there’s no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many dif­fer­ent types or forms of AI since AI is a broad concept, the crit­ic­al cat­egor­ies we need to think about are based on an AI’s caliber. There are three major AI caliber cat­egor­ies:

AI Caliber 1) Arti­fi­cial Nar­row Intel­li­gence (ANI): Some­times referred to as Weak AI, Arti­fi­cial Nar­row Intel­li­gence is AI that spe­cial­izes in one area. There’s AI that can beat the world chess cham­pi­on in chess, but that’s the only thing it does. Ask it to fig­ure out a bet­ter way to store data on a hard drive, and it’ll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we've yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Arti­fi­cial Super­in­tel­li­gence (ASI): Oxford philo­soph­er and lead­ing AI thinker Nick Bostrom defines super­in­tel­li­gence as “an intel­lect that is much smarter than the best human brains in prac­tic­ally every field, includ­ing sci­entif­ic cre­ativ­ity, gen­er­al wis­dom and social skills.” Arti­fi­cial Super­in­tel­li­gence ranges from a com­puter that’s just a little smarter than a human to one that’s tril­lions of times smarter—across the board. ASI is the reas­on the top­ic of AI is such a spicy meat­ball and why the words immor­tal­ity and extinc­tion will both appear in these posts mul­tiple times.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.

Let’s take a close look at what the lead­ing thinkers in the field believe this road looks like and why this revolu­tion might hap­pen way soon­er than you might think:

Where We Are Currently—A World Running on ANI

Arti­fi­cial Nar­row Intel­li­gence is machine intel­li­gence that equals or exceeds human intel­li­gence or effi­ciency at a spe­cif­ic thing. A few examples:

  • Cars are full of ANI sys­tems, from the com­puter that fig­ures out when the anti-lock brakes should kick in to the com­puter that tunes the para­met­ers of the fuel injec­tion sys­tems. Google’s self-driv­ing car, which is being tested now, will con­tain robust ANI sys­tems that allow it to per­ceive and react to the world around it.
  • Your phone is a little ANI fact­ory. When you nav­ig­ate using your map app, receive tailored music recom­mend­a­tions from Pan­dora, check tomorrow’s weath­er, talk to Siri, or dozens of oth­er every­day activ­it­ies, you’re using ANI.
  • Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what's spam and what's not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences. The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly. (A toy sketch of this kind of learning filter appears just after this list.)
  • You know the whole creepy thing that goes on when you search for a product on Amazon and then you see that as a “recom­men­ded for you” product on a dif­fer­ent site, or when Face­book some­how knows who it makes sense for you to add as a friend? That’s a net­work of ANI sys­tems, work­ing togeth­er to inform each oth­er about who you are and what you like and then using that inform­a­tion to decide what to show you. Same goes for Amazon’s “People who bought this also bought…” thing—that’s an ANI sys­tem whose job it is to gath­er info from the beha­vi­or of mil­lions of cus­tom­ers and syn­thes­ize that info to clev­erly upsell you so you’ll buy more things.
  • Google Trans­late is anoth­er clas­sic ANI system—impressively good at one nar­row task. Voice recog­ni­tion is anoth­er, and there are a bunch of apps that use those two ANIs as a tag team, allow­ing you to speak a sen­tence in one lan­guage and have the phone spit out the same sen­tence in anoth­er.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determ­ined the price of your tick­et.
  • The world’s best Check­ers, Chess, Scrabble, Back­gam­mon, and Oth­ello play­ers are now all ANI sys­tems.
  • Google search is one large ANI brain with incred­ibly soph­ist­ic­ated meth­ods for rank­ing pages and fig­ur­ing out what to show you in par­tic­u­lar. Same goes for Facebook’s News­feed.
  • And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets6), and in expert systems like those that help doctors make diagnoses and, most famously, IBM's Watson, who contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.
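
To give a flavor of how simple and how narrow such a system can be, here is a toy version of the spam-filter example from the list above: a bare-bones word-count classifier that "tailors its intelligence to you" from messages you have already labeled. It is an illustration of the idea, not how any production filter actually works:

```python
from collections import Counter

# Toy spam filter: learns word frequencies from messages the user has already labeled.
spam_counts, ham_counts = Counter(), Counter()
spam_total = ham_total = 0

def train(message, is_spam):
    """Update word counts based on the user's label; this is the 'learning' step."""
    global spam_total, ham_total
    words = message.lower().split()
    if is_spam:
        spam_counts.update(words); spam_total += len(words)
    else:
        ham_counts.update(words); ham_total += len(words)

def spam_score(message):
    """Crude score: how much more 'spam-like' than 'ham-like' the message's words are."""
    score = 0.0
    for w in message.lower().split():
        p_spam = (spam_counts[w] + 1) / (spam_total + 2)
        p_ham = (ham_counts[w] + 1) / (ham_total + 2)
        score += (p_spam - p_ham)
    return score

train("win a free prize now", is_spam=True)
train("lunch meeting moved to noon", is_spam=False)
print(spam_score("free prize inside"))   # positive score: looks spammy
print(spam_score("meeting at noon"))     # negative score: looks legitimate
```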

ANI sys­tems as they are now aren’t espe­cially scary. At worst, a glitchy or badly-pro­grammed ANI can cause an isol­ated cata­strophe like knock­ing out a power grid, caus­ing a harm­ful nuc­le­ar power plant mal­func­tion, or trig­ger­ing a fin­an­cial mar­kets dis­aster (like the 2010 Flash Crash when an ANI pro­gram reacted the wrong way to an unex­pec­ted situ­ation and caused the stock mar­ket to briefly plum­met, tak­ing $1 tril­lion of mar­ket value with it, only part of which was recovered when the mis­take was cor­rec­ted).

But while ANI doesn’t have the cap­ab­il­ity to cause an exist­en­tial threat, we should see this increas­ingly large and com­plex eco­sys­tem of rel­at­ively-harm­less ANI as a pre­curs­or of the world-alter­ing hur­ricane that’s on the way. Each new ANI innov­a­tion quietly adds anoth­er brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI sys­tems “are like the amino acids in the early Earth’s prim­or­di­al ooze”—the inan­im­ate stuff of life that, one unex­pec­ted day, woke up.

The Road From ANI to AGI

Why It’s So Hard

Noth­ing will make you appre­ci­ate human intel­li­gence like learn­ing about how unbe­liev­ably chal­len­ging it is to try to cre­ate a com­puter as smart as we are. Build­ing sky­scrapers, put­ting humans in space, fig­ur­ing out the details of how the Big Bang went down—all far easi­er than under­stand­ing our own brain or how to make some­thing as cool as it. As of now, the human brain is the most com­plex object in the known uni­verse.

What’s inter­est­ing is that the hard parts of try­ing to build AGI (a com­puter as smart as humans ingen­er­al, not just at one nar­row spe­cialty) are not intu­it­ively what you’d think they are. Build a com­puter that can mul­tiply two ten-digit num­bers in a split second—incredibly easy. Build one that can look at a dog and answer wheth­er it’s a dog or a cat—spectacularly dif­fi­cult. Make AI that can beat any human in chess? Done. Make one that can read a para­graph from a six-year-old’s pic­ture book and not just recog­nize the words but under­stand the mean­ing of them? Google is cur­rently spend­ing bil­lions of dol­lars try­ing to do it. Hard things—like cal­cu­lus, fin­an­cial mar­ket strategy, and lan­guage translation—are mind-numb­ingly easy for a com­puter, while easy things—like vis­ion, motion, move­ment, and perception—are insanely hard for it. Or, as com­puter sci­ent­ist Don­ald Knuth puts it, “AI has by now suc­ceeded in doing essen­tially everything that requires ‘think­ing’ but has failed to do most of what people and anim­als do ‘without think­ing.’”7

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it's that your brain is super impressive for being able to.

On the oth­er hand, mul­tiply­ing big num­bers or play­ing chess are new activ­it­ies for bio­lo­gic­al creatures and we haven’t had any time to evolve a pro­fi­ciency at them, so a com­puter doesn’t need to work too hard to beat us. Think about it—which would you rather do, build a pro­gram that could mul­tiply big num­bers or one that could under­stand the essence of a B well enough that you could show it a B in any one of thou­sands of unpre­dict­able fonts or hand­writ­ing and it could instantly know it was a B?

One fun example—when you look at this, you and a com­puter both can fig­ure out that it’s a rect­angle with two dis­tinct shades, altern­at­ing:

[Image: a mostly covered picture, showing only a rectangle with two alternating shades]

Tied so far. But if you pick up the black and reveal the whole image…

[Image: the uncovered picture: opaque and translucent cylinders, slats, and 3-D corners]

…you have no prob­lem giv­ing a full descrip­tion of the vari­ous opaque and trans­lu­cent cyl­in­ders, slats, and 3‑D corners, but the com­puter would fail miser­ably. It would describe what it sees—a vari­ety of two-dimen­sion­al shapes in sev­er­al dif­fer­ent shades—which is actu­ally what’s there. Your brain is doing a ton of fancy shit to inter­pret the implied depth, shade-mix­ing, and room light­ing the pic­ture is try­ing to por­tray.8 And look­ing at the pic­ture below, a com­puter sees a two-dimen­sion­al white, black, and gray col­lage, while you eas­ily see what it really is—a photo of an entirely-black, 3‑D rock:

[Image: a photo of an entirely black, 3-D rock]

Cred­it: Mat­thew Lloyd

And everything we just men­tioned is still only tak­ing in stag­nant inform­a­tion and pro­cessing it. To be human-level intel­li­gent, a com­puter would have to under­stand things like the dif­fer­ence between subtle facial expres­sions, the dis­tinc­tion between being pleased, relieved, con­tent, sat­is­fied, and glad, and why Brave­heart was great but The Pat­ri­ot was ter­rible.

Daunt­ing.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that def­in­itely needs to hap­pen for AGI to be a pos­sib­il­ity is an increase in the power of com­puter hard­ware. If an AI sys­tem is going to be as intel­li­gent as the brain, it’ll need to equal the brain’s raw com­put­ing capa­city.

One way to express this capa­city is in the total cal­cu­la­tions per second (cps) the brain could man­age, and you could come to this num­ber by fig­ur­ing out the max­im­um cps of each struc­ture in the brain and then adding them all togeth­er.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 10^16, or 10 quadrillion cps.
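
The trick is just weight-proportional scaling. A minimal sketch with made-up placeholder numbers for the expert estimate and the region's weight share (the method, not the figures, is the point):

```python
# Kurzweil-style shortcut: take an expert estimate of the calculations per
# second (cps) of ONE brain structure, then scale by that structure's share
# of the whole brain. The two inputs below are made-up placeholders, not
# real neuroscience estimates.

region_cps = 1.0e14              # hypothetical expert estimate for one structure
region_share_of_brain = 0.01     # that structure's fraction of the whole brain

whole_brain_cps = region_cps / region_share_of_brain
print(f"Estimated whole-brain capacity: {whole_brain_cps:.0e} cps")
# -> 1e+16 cps, i.e. about 10 quadrillion cps, the ballpark quoted above.
```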

Cur­rently, the world’s fast­est super­com­puter, China’s Tianhe‑2, has actu­ally beaten that num­ber, clock­ing in at about 34 quad­ril­lion cps. But Tianhe‑2 is also a dick, tak­ing up 720 square meters of space, using 24 mega­watts of power (the brain runs on just 20 watts), and cost­ing $390 mil­lion to build. Not espe­cially applic­able to wide usage, or even most com­mer­cial or indus­tri­al usage yet.

Kur­z­weil sug­gests that we think about the state of com­puters by look­ing at how many cps you can buy for $1,000. When that num­ber reaches human-level—10 quad­ril­lion cps—then that’ll mean AGI could become a very real part of life.

Moore’s Law is a his­tor­ic­ally-reli­able rule that the world’s max­im­um com­put­ing power doubles approx­im­ately every two years, mean­ing com­puter hard­ware advance­ment, like gen­er­al human advance­ment through his­tory, grows expo­nen­tially. Look­ing at how this relates to Kurzweil’s cps/$1,000 met­ric, we’re cur­rently at about 10 tril­lion cps/$1,000, right on pace with this graph’s pre­dicted tra­ject­ory:9

[Graph: exponential growth of computing, measured in calculations per second per $1,000, over time]

So the world’s $1,000 com­puters are now beat­ing the mouse brain and they’re at about a thou­sandth of human level. This doesn’t sound like much until you remem­ber that we were at about a tril­lionth of human level in 1985, a bil­lionth in 1995, and a mil­lionth in 2005. Being at a thou­sandth in 2015 puts us right on pace to get to an afford­able com­puter by 2025 that rivals the power of the brain.

So on the hard­ware side, the raw power needed for AGI is tech­nic­ally avail­able now, in China, and we’ll be ready for afford­able, wide­spread AGI-caliber hard­ware with­in 10 years. But raw com­pu­ta­tion­al power alone doesn’t make a com­puter gen­er­ally intelligent—the next ques­tion is, how do we bring human-level intel­li­gence to all that power?

Second Key to Creating AGI: Making It Smart

This is the icky part. The truth is, no one really knows how to make it smart—we’re still debat­ing how to make a com­puter human-level intel­li­gent and cap­able of know­ing what a dog and a weird-writ­ten B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most com­mon strategies I came across:

1) Pla­gi­ar­ize the brain.

This is like sci­ent­ists toil­ing over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep study­ing dili­gently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped try­ing to build a super-com­plex com­puter, and there hap­pens to be a per­fect pro­to­type for one in each of our heads.

The sci­ence world is work­ing hard on reverse engin­eer­ing the brain to fig­ure out how evol­u­tion made such a rad thing—optim­ist­ic estim­ates say we can do this by 2030. Once we do that, we’ll know all the secrets of how the brain runs so power­fully and effi­ciently and we can draw inspir­a­tion from it and steal its innov­a­tions. One example of com­puter archi­tec­ture that mim­ics the brain is the arti­fi­cial neur­al net­work. It starts out as a net­work of tran­sist­or “neur­ons,” con­nec­ted to each oth­er with inputs and out­puts, and it knows nothing—like an infant brain. The way it “learns” is it tries to do a task, say hand­writ­ing recog­ni­tion, and at first, its neur­al fir­ings and sub­sequent guesses at deci­pher­ing each let­ter will be com­pletely ran­dom. But when it’s told it got some­thing right, the tran­sist­or con­nec­tions in the fir­ing path­ways that happened to cre­ate that answer are strengthened; when it’s told it was wrong, those path­ways’ con­nec­tions are weakened. After a lot of this tri­al and feed­back, the net­work has, by itself, formed smart neur­al path­ways and the machine has become optim­ized for the task. The brain learns a bit like this but in a more soph­ist­ic­ated way, and as we con­tin­ue to study the brain, we’re dis­cov­er­ing ingeni­ous new ways to take advant­age of neur­al cir­cuitry.
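
Here is a deliberately tiny sketch of that strengthen-or-weaken loop: a single layer of weighted connections guesses which of two 3x3 "letters" it is seeing, and the connections on whichever pathway produced the answer are nudged up after a correct guess and down after a wrong one. (Real neural networks are trained with gradient-based methods across many layers; this cartoon only illustrates the trial-and-feedback idea described above, and the patterns and learning rate are arbitrary.)

```python
import random

# Two 3x3 "letters" flattened to 9 pixels (1 = ink, 0 = blank). Arbitrary toy data.
PATTERNS = {
    "T": [1, 1, 1,  0, 1, 0,  0, 1, 0],
    "L": [1, 0, 0,  1, 0, 0,  1, 1, 1],
}
LABELS = list(PATTERNS)

# One weight per (pixel -> label) connection, starting near zero: an "infant" network.
weights = {label: [random.uniform(-0.01, 0.01) for _ in range(9)] for label in LABELS}

def guess(pixels):
    """Fire each output 'neuron' and answer with the strongest response."""
    scores = {label: sum(w * p for w, p in zip(weights[label], pixels)) for label in LABELS}
    return max(scores, key=scores.get)

LEARNING_RATE = 0.1
for _ in range(200):                         # trial and feedback, many times over
    truth = random.choice(LABELS)
    pixels = PATTERNS[truth]
    answer = guess(pixels)
    nudge = LEARNING_RATE if answer == truth else -LEARNING_RATE   # strengthen or weaken
    for i, p in enumerate(pixels):
        if p:                                # only the connections that actually "fired"
            weights[answer][i] += nudge

print(guess(PATTERNS["T"]), guess(PATTERNS["L"]))   # after training, usually: T L
```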

More extreme pla­gi­ar­ism involves a strategy called “whole brain emu­la­tion,” where the goal is to slice a real brain into thin lay­ers, scan each one, use soft­ware to assemble an accur­ate recon­struc­ted 3‑D mod­el, and then imple­ment the mod­el on a power­ful com­puter. We’d then have a com­puter offi­cially cap­able of everything the brain is cap­able of—it would just need to learn and gath­er inform­a­tion. If engin­eers get really good, they’d be able to emu­late a real brain with such exact accur­acy that the brain’s full per­son­al­ity and memory would be intact once the brain archi­tec­ture has been uploaded to a com­puter. If the brain belonged to Jim right before he passed away, the com­puter would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turn­ing Jim into an unima­gin­ably smart ASI, which he’d prob­ably be really excited about.

How far are we from achieving whole brain emulation? Well so far, we've just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress—now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

2) Try to make evol­u­tion do what it did before but for us this time.

So if we decide the smart kid’s test is too hard to copy, we can try to copy the way he stud­ies for the tests instead.

Here’s some­thing we know. Build­ing a com­puter as power­ful as the brain is possible—our own brain’s evol­u­tion is proof. And if the brain is just too com­plex for us to emu­late, we could try to emu­lateevol­u­tion instead. The fact is, even if we can emu­late a brain, that might be like try­ing to build an air­plane by copy­ing a bird’s wing-flap­ping motions—often, machines are best designed using a fresh, machine-ori­ented approach, not by mim­ick­ing bio­logy exactly.

So how can we sim­u­late evol­u­tion to build AGI? The meth­od, called “genet­ic algorithms,” would work some­thing like this: there would be a per­form­ance-and-eval­u­ation pro­cess that would hap­pen again and again (the same way bio­lo­gic­al creatures “per­form” by liv­ing life and are “eval­u­ated” by wheth­er they man­age to repro­duce or not). A group of com­puters would try to do tasks, and the most suc­cess­ful ones would be bred with each oth­er by hav­ing half of each of their pro­gram­ming merged togeth­er into a new com­puter. The less suc­cess­ful ones would be elim­in­ated. Over many, many iter­a­tions, this nat­ur­al selec­tion pro­cess would pro­duce bet­ter and bet­ter com­puters. The chal­lenge would be cre­at­ing an auto­mated eval­u­ation and breed­ing cycle so this evol­u­tion pro­cess could run on its own.
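
A bare-bones version of that perform/evaluate/breed/eliminate cycle, on a deliberately silly stand-in task (evolving a bit string toward all 1s), just to show the loop; the genome length, population size, and mutation rate are arbitrary:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 30, 60
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    """'Performance' on the task: here, simply how many 1s the program carries."""
    return sum(genome)

def breed(parent_a, parent_b):
    """Merge half of each parent's 'programming', with occasional random mutations."""
    cut = GENOME_LEN // 2
    child = parent_a[:cut] + parent_b[cut:]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)        # evaluate
    survivors = population[: POP_SIZE // 2]           # eliminate the less successful
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children                 # breed the next generation

print(max(fitness(g) for g in population), "out of", GENOME_LEN)
```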

The down­side of copy­ing evol­u­tion is that evol­u­tion likes to take a bil­lion years to do things and we want to do this in a few dec­ades.

But we have a lot of advant­ages over evol­u­tion. First, evol­u­tion has no foresight and works randomly—it pro­duces more unhelp­ful muta­tions than help­ful ones, but we would con­trol the pro­cess so it would only be driv­en by bene­fi­cial glitches and tar­geted tweaks. Secondly, evol­u­tion doesn’t aim for any­thing, includ­ing intelligence—sometimes an envir­on­ment might even select against high­er intel­li­gence (since it uses a lot of energy). We, on the oth­er hand, could spe­cific­ally dir­ect this evol­u­tion­ary pro­cess toward increas­ing intel­li­gence. Third, to select for intel­li­gence, evol­u­tion has to innov­ate in a bunch of oth­er ways to facil­it­ate intelligence—like revamp­ing the ways cells pro­duce energy—when we can remove those extra bur­dens and use things like elec­tri­city. It’s no doubt we’d be much, much faster than evolution—but it’s still not clear wheth­er we’ll be able to improve upon evol­u­tion enough to make this a viable strategy.

3) Make this whole thing the computer’s prob­lem, not ours.

This is when sci­ent­ists get des­per­ate and try to pro­gram the test to take itself. But it might be the most prom­ising meth­od we have.

The idea is that we’d build a com­puter whose two major skills would be doing research on AI and cod­ing changes into itself—allowing it to not only learn but to improve its own archi­tec­ture. We’d teach com­puters to be com­puter sci­ent­ists so they could boot­strap their own devel­op­ment. And that would be their main job—figuring out how to make them­selves smarter. More on this later.

All of This Could Happen Soon

Rap­id advance­ments in hard­ware and innov­at­ive exper­i­ment­a­tion with soft­ware are hap­pen­ing sim­ul­tan­eously, and AGI could creep up on us quickly and unex­pec­tedly for two main reas­ons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards.

2) When it comes to soft­ware, pro­gress can seem slow, but then one epi­phany can instantly change the rate of advance­ment (kind of like the way sci­ence, dur­ing the time humans thought the uni­verse was geo­centric, was hav­ing dif­fi­culty cal­cu­lat­ing how the uni­verse worked, but then the dis­cov­ery that it was helio­centric sud­denly made everything much easi­er). Or, when it comes to some­thing like a com­puter that improves itself, we might seem far away but actu­ally be just one tweak of the sys­tem away from hav­ing it become 1,000 times more effect­ive and zoom­ing upward to human-level intel­li­gence.

The Road From AGI to ASI

At some point, we’ll have achieved AGI—computers with human-level gen­er­al intel­li­gence. Just a bunch of people and com­puters liv­ing togeth­er in equal­ity.

Oh actu­ally not at all.

The thing is, AGI with an identic­al level of intel­li­gence and com­pu­ta­tion­al capa­city as a human would still have sig­ni­fic­ant advant­ages over humans. Like:

Hard­ware:

  • Speed. The brain’s neur­ons max out at around 200 Hz, while today’s micro­pro­cessors (which are much slower than they will be when we reach AGI) run at 2 GHz, or 10 mil­lion times faster than our neur­ons. And the brain’s intern­al com­mu­nic­a­tions, which can move at about 120 m/​s, are hor­ribly out­matched by a computer’s abil­ity to com­mu­nic­ate optic­ally at the speed of light.
  • Size and stor­age. The brain is locked into its size by the shape of our skulls, and it couldn’t get much big­ger any­way, or the 120 m/​s intern­al com­mu­nic­a­tions would take too long to get from one brain struc­ture to anoth­er. Com­puters can expand to any phys­ic­al size, allow­ing far more hard­ware to be put to work, a much lar­ger work­ing memory (RAM), and a longterm memory (hard drive stor­age) that has both far great­er capa­city and pre­ci­sion than our own.
  • Reliability and durability. It's not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they're less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.

Soft­ware:

  • Edit­ab­il­ity, upgrad­ab­il­ity, and a wider breadth of pos­sib­il­ity. Unlike the human brain, com­puter soft­ware can receive updates and fixes and can be eas­ily exper­i­mented on. The upgrades could also span to areas where human brains are weak. Human vis­ion soft­ware is superbly advanced, while its com­plex engin­eer­ing cap­ab­il­ity is pretty low-grade. Com­puters could match the human on vis­ion soft­ware but could also become equally optim­ized in engin­eer­ing and any oth­er area.
  • Col­lect­ive cap­ab­il­ity. Humans crush all oth­er spe­cies at build­ing a vast col­lect­ive intel­li­gence. Begin­ning with the devel­op­ment of lan­guage and the form­ing of large, dense com­munit­ies, advan­cing through the inven­tions of writ­ing and print­ing, and now intens­i­fied through tools like the inter­net, humanity’s col­lect­ive intel­li­gence is one of the major reas­ons we’ve been able to get so far ahead of all oth­er spe­cies. And com­puters will be way bet­ter at it than we are. A world­wide net­work of AI run­ning a par­tic­u­lar pro­gram could reg­u­larly sync with itself so that any­thing any one com­puter learned would be instantly uploaded to all oth­er com­puters. The group could also take on one goal as a unit, because there wouldn’t neces­sar­ily be dis­sent­ing opin­ions and motiv­a­tions and self-interest, like we have with­in the human pop­u­la­tion.10

AI, which will likely get to AGI by being pro­grammed to self-improve, wouldn’t see “human-level intel­li­gence” as some import­ant milestone—it’s only a rel­ev­ant mark­er from our point of view—and wouldn’t have any reas­on to “stop” at our level. And giv­en the advant­ages over us that even human intel­li­gence-equi­val­ent AGI would have, it’s pretty obvi­ous that it would only hit human intel­li­gence for a brief instant before racing onwards to the realm of super­i­or-to-human intel­li­gence.

This may shock the shit out of us when it hap­pens. The reas­on is that from our per­spect­ive, A) while the intel­li­gence of dif­fer­ent kinds of anim­als var­ies, the main char­ac­ter­ist­ic we’re aware of about any animal’s intel­li­gence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

[Chart: how we picture the intelligence spectrum, with animals far below humans and a large gap between the dumbest and smartest humans]

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term "the village idiot"—we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot-level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:

[Chart: the actual spectrum: the entire human range is a tiny band, and AI shoots past it from "village idiot" level to far beyond Einstein]

And what happens…after that?

An Intelligence Explosion

I hope you enjoyed nor­mal time, because this is when this top­ic gets unnor­mal and scary, and it’s gonna stay that way from here for­ward. I want to pause here to remind you that every single thing I’m going to say is real—real sci­ence and real fore­casts of the future from a large array of the most respec­ted thinkers and sci­ent­ists. Just keep remem­ber­ing that.

Any­way, as I said above, most of our cur­rent mod­els for get­ting to AGI involve the AI get­ting there by self-improve­ment. And once it gets to AGI, even sys­tems that formed and grew through meth­ods that didn’t involve self-improve­ment would now be smart enough to begin self-improv­ing if they wanted to.3

And here’s where we get to an intense concept: recurs­ive self-improve­ment. It works like this—

An AI system at a certain level—let's say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it's smarter—maybe at this point it's at Einstein's level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion,11 and it's the ultimate example of The Law of Accelerating Returns.
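
A toy way to see why that loop runs away rather than leveling off: if each cycle's gain is proportional to the intelligence doing the improving, growth compounds. The units and constants below are arbitrary (calling the starting point "village idiot = 1.0" is purely illustrative); only the shape of the curve matters:

```python
# Toy recursive self-improvement: each cycle, the system's gain is proportional
# to its current capability. Units and constants are arbitrary illustrations.

intelligence = 1.0        # starting point; call "village idiot" 1.0 for illustration
IMPROVEMENT_FACTOR = 0.5  # fraction of current capability converted into gains per cycle

for cycle in range(1, 13):
    intelligence += IMPROVEMENT_FACTOR * intelligence
    print(f"cycle {cycle:2d}: intelligence = {intelligence:8.1f}")
# The first few cycles look unremarkable; a dozen cycles later the number has
# left the entire (narrow) human range far behind.
```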

There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040—that's only 25 years from now, which doesn't sound that huge until you consider that many of the thinkers in this field think it's likely that the progression from AGI to ASI happens very quickly. Like—this could happen:

It takes dec­ades for the first AI sys­tem to reach low-level gen­er­al intel­li­gence, but it finally hap­pens. A com­puter is able to under­stand the world around it as well as a human four-year-old. Sud­denly, with­in an hour of hit­ting that mile­stone, the sys­tem pumps out the grand the­ory of phys­ics that uni­fies gen­er­al relativ­ity and quantum mech­an­ics, some­thing no human has been able to defin­it­ively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intel­li­gent than a human.

Super­in­tel­li­gence of that mag­nitude is not some­thing we can remotely grasp, any more than a bumble­bee can wrap its head around Keyne­sian Eco­nom­ics. In our world, smart means a 130 IQ and stu­pid means an 85 IQ—we don’t have a word for an IQ of 12,952.

What we do know is that humans' utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.

If our mea­ger brains were able to invent wifi, then some­thing 100 or 1,000 or 1 bil­lion times smarter than we are should have no prob­lem con­trolling the pos­i­tion­ing of each and every atom in the world in any way it likes, at any time—everything we con­sider magic, every power we ima­gine a supreme God to have will be as mundane an activ­ity for the ASI as flip­ping on a light switch is for us. Cre­at­ing the tech­no­logy to reverse human aging, cur­ing dis­ease and hun­ger and even mor­tal­ity, repro­gram­ming the weath­er to pro­tect the future of life on Earth—all sud­denly pos­sible. Also pos­sible is the imme­di­ate end of all life on Earth. As far as we’re con­cerned, if an ASI comes to being, there is now an omni­po­tent God on Earth—and the all-import­ant ques­tion for us is:

 

Will it be a nice God?
