AI – The Most Important Topic for Humanity's Future
What’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future.
There’s some confusion over the term AI in general that I want to clarify first.
We associate AI with movies- thanks Star Wars, Terminator, 2001: A Space Odyssey, even The Jetsons- but those are fiction, as are their robot characters. So AI SEEMS a little fictional to us.
Overall AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is also why it’s confusing.
Some of you might know this already: we use AI all the time in our daily lives, but we often don’t realize it’s AI.
So let’s clear things up.
First, stop thinking of robots.
A robot is a container for AI (sounds familiar), sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot.
AI is the brain, and the robot is its body—if it even has a body.
For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
Secondly, you’ve probably heard the term “singularity” or “technological singularity.”
This term, (science/geek talk for a second) has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply.
Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. I found that some of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that idea throughout).
Finally, (here is where it gets fun/interesting) while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber.
There are three major AI caliber categories:
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
AI Caliber 3) Artificial Superintelligence (ASI) (this one is a little spooky): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.
ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction will both appear in these posts multiple times.
As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.
Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples (if you already get the gist, you can skip ahead to the more INTERESTING part- I pulled these from online for background purposes hehe):
Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in, to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.
Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or dozens of other everyday activities, you’re using ANI.
Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences. The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly.
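To make the spam-filter idea concrete, here’s a toy sketch of that “starts off loaded with intelligence, then tailors itself to you” pattern. It’s a made-up word-counting model purely for illustration- real filters are far more sophisticated (many use statistical techniques like naive Bayes).

```python
# Toy sketch (illustrative only): a filter that ships with some prior
# "spammy word" knowledge and then tailors itself to the user's labels.
from collections import Counter

spam_words = Counter({"winner": 5, "free": 3})  # pre-loaded intelligence
ham_words = Counter()                           # learned from you over time

def learn(message, is_spam):
    """Update word counts from the user's own labeling decisions."""
    (spam_words if is_spam else ham_words).update(message.lower().split())

def looks_spammy(message):
    """Score a message against what the filter has seen so far."""
    words = message.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

learn("free tickets winner", is_spam=True)
learn("lunch tomorrow free", is_spam=False)
print(looks_spammy("you are a winner"))  # True
print(looks_spammy("lunch tomorrow"))    # False
```

The point is just the shape of the system: fixed starting knowledge plus experience-driven updates, all in service of one narrow task.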
You know the whole creepy thing that goes on when you search for a product on Amazon and then you see that as a “recommended for you” product on a different site, or when Facebook somehow knows who it makes sense for you to add as a friend? That’s a network of ANI systems, working together to inform each other about who you are and what you like and then using that information to decide what to show you. Same goes for Amazon’s “People who bought this also bought…” thing—that’s an ANI system whose job it is to gather info from the behavior of millions of customers and synthesize that info to cleverly upsell you so you’ll buy more things.
Google Translate is another classic ANI system—impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.
When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
The world’s best Checkers, Chess, Scrabble, Backgammon, and Othello players are now all ANI systems.
Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.
And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets), and in expert systems like those that help doctors make diagnoses and, most famously, IBM’s Watson, who contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.
ANI systems as they are now aren’t especially scary.
At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).
But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.
Now, nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are.
And this is where I change the tone of the discussion from stating facts/explaining with science and technology, to… theorizing/philosophizing about science and technology… 😉 (This is the FUN part)
Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down- all are far easier than understanding our own brain or how to make something as cool as it. In case you didn’t realize it: as of now, the human brain is the most complex object in the known universe. It’s where fact-based science turns metaphysical/spiritual/totally baffling.
If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity.
One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.
Luckily, there are people smarter than us who have already done this…
Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total.
Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark- around 10^16, or 10 quadrillion, cps.
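Kurzweil’s shortcut is just proportional scaling. Here’s a minimal sketch- note that the 2e14 cps figure and the 2%-of-brain-weight figure below are made-up illustrative assumptions, not real estimates:

```python
# Illustrative sketch of the proportional-scaling shortcut: take an expert
# cps estimate for ONE brain structure and divide it by that structure's
# share of total brain weight. Both numbers below are hypothetical.

def estimate_total_cps(structure_cps, weight_fraction):
    """Scale one structure's cps up by its fraction of total brain weight."""
    return structure_cps / weight_fraction

# e.g. a structure estimated at 2e14 cps that makes up ~2% of brain weight
# (hypothetical values) lands right in Kurzweil's ballpark:
print(f"{estimate_total_cps(2e14, 0.02):.0e} cps")  # 1e+16 cps
```

Doing this with several independent structure estimates and getting the same ballpark each time is what gives the 10^16 figure its (rough) credibility.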
Currently, the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps.
But Tianhe-2 is also a dick: it takes up 720 square meters of space, uses 24 megawatts of power (the brain runs on just 20 watts), and cost $390 million to build.
Not especially applicable to wide usage, or even most commercial or industrial usage yet.
Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000.
When that number reaches human-level—10 quadrillion cps—then that’ll mean AGI could become a very real part of life.
Now, for those not knowledgeable in the tech field- you need to know this IMPORTANT concept:
Moore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially.
Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000.
The world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level.
This doesn’t sound like much until you remember that we were at about:
A trillionth of human level in 1985,
A billionth in 1995,
And a millionth in 2005.
Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain. (YES, THAT’S MOST LIKELY WITHIN YOUR LIFETIME)
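The trajectory above (a trillionth in 1985, a billionth in 1995, a millionth in 2005, a thousandth in 2015) is roughly a 1,000x gain per decade. A quick sketch of that extrapolation, assuming the pattern simply holds:

```python
# Sketch of the ~1,000x-per-decade trend described above: fraction of
# human-level capacity (~10^16 cps) available per $1,000, by year.

def fraction_of_human_level(year, base_year=1985, base_fraction=1e-12):
    """Assume a 1,000x improvement per decade from a trillionth in 1985."""
    decades = (year - base_year) / 10
    return base_fraction * 1000 ** decades

for year in (1985, 1995, 2005, 2015, 2025):
    print(year, fraction_of_human_level(year))
# 2015 -> ~1/1,000 of human level; 2025 -> ~human level (1.0)
```

Whether hardware actually keeps doubling on schedule is of course an assumption- the sketch only shows why the 2025 projection follows from the historical data points.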
So on the hardware side, the raw power needed for AGI is technically available now, in China, and we’ll be ready for affordable, widespread AGI-caliber hardware within 10 years.
But raw computational power alone doesn’t make a computer generally intelligent—the next question is, how do we bring human-level intelligence to all that power???
One strategy is called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. (no, I’m not describing a movie plot line I created… this is REAL)
We’d then have a computer officially capable of everything the brain is capable of—it would just need to learn and gather information.
If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer.
If the brain belonged to “Joe” right before he passed away, the computer would now wake up as “Joe” (If you’re wondering what actually makes you, YOU, well, that’s a whole other topic), which would be a robust human-level AGI, and we could now work on turning Joe into an unimaginably smart ASI, which he’d probably (I’m being optimistic :)) be really excited about.
So, how far are we from achieving whole brain emulation?
Well, so far we’ve only recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons.
The human brain contains 100 billion.
If that makes it seem like a hopeless project, remember the power of exponential progress—now that we’ve conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.
The brain’s neurons max out at around 200 Hz.
Today’s microprocessors (which are much slower than they will be when we reach AGI) run at 2 GHz, or 10 million times faster than our neurons.
And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the SPEED of LIGHT.
Obviously, the brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another.
Computers, however, can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a long term memory (hard drive storage) that has both far greater capacity and precision than our own.
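The speed gaps described above are straightforward arithmetic, which a few lines make explicit:

```python
# The speed comparisons above, as explicit arithmetic.
neuron_hz = 200        # peak neuron firing rate, ~200 Hz
cpu_hz = 2e9           # a 2 GHz microprocessor
print(cpu_hz / neuron_hz)  # 10 million times faster than a neuron

axon_speed_mps = 120   # brain's internal signaling, ~120 m/s
light_speed_mps = 3e8  # optical communication, ~speed of light
print(light_speed_mps / axon_speed_mps)  # 2.5 million times faster
```

And unlike the brain, those figures are floors, not ceilings- processors and interconnects keep improving while neurons don’t.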
AI- which will likely get to AGI by being programmed to self-improve- wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level.
And given the advantages over us that even a human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence. (This is where it really starts to sound like some Matrix-style movie plot- but keep in mind it is NOT)
This may shock the shit out of us when it happens.
The reason is that from our perspective:
- A) While the intelligence of different kinds of animals varies, the main thing we notice about any animal’s intelligence is that it’s far lower than ours.
- B) We view the smartest humans as WAY smarter than the dumbest humans.
And here’s where we get to an INTENSE CONCEPT (PAY ATTENTION): Recursive self-improvement.
It works like this—
An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. (Easy to understand)
Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps.
These leaps make it much smarter than any human, allowing it to make even bigger leaps.
As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the super intelligent level of an ASI system. (This is where your jaw can drop).
This is called an Intelligence Explosion.
And this Intelligence Explosion is the ultimate example of The Law of Accelerating Returns.
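The recursive self-improvement loop above can be sketched as a toy model: each round’s improvement is proportional to the current intelligence level, so the leaps compound. All numbers here are purely illustrative- nothing in this sketch models a real AI system.

```python
# Toy model (illustrative only): each improvement step is proportional to
# the current intelligence level, so the leaps grow larger each round.

def intelligence_explosion(start=1.0, rounds=25, gain=0.5):
    """Return the intelligence level after each self-improvement round."""
    level, history = start, [start]
    for _ in range(rounds):
        level += gain * level  # smarter systems make bigger leaps
        history.append(level)
    return history

levels = intelligence_explosion()
# With these toy numbers, it takes 12 rounds to go from 1x to 100x the
# starting level, then only a handful more rounds to pass 1,000x.
first_100x = next(i for i, x in enumerate(levels) if x > 100.0)
print(first_100x)
```

The slow early rounds followed by a sudden blow-through is exactly the “village idiot to Einstein to far beyond” shape described above.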
There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly.
Like—this could happen:
It takes decades for the first AI system to reach low-level general intelligence, but it finally happens.
A computer is able to understand the world around it as well as a human four-year-old.
Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do.
90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.
Super intelligence of that magnitude is NOT something we can remotely grasp… any more than a bumblebee can wrap its head around Keynesian Economics.
In our world, “smart” means a 130 IQ and “stupid” means an 85 IQ- but WE DON’T HAVE A WORD FOR AN IQ OF 12,952.
What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power.
Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.
So here is the end of this rabbit hole, that I’ve worked my way down, to show you…
If our meager brains were able to invent WiFi, then something 100 or 1,000 or 1 billion times smarter than we are SHOULD HAVE NO PROBLEM controlling the positioning of EACH and EVERY ATOM in the WORLD- in any way it likes, at any time…
Everything we consider magic- every power we imagine a Supreme God to have- will be, in a sense, as mundane an activity for the ASI as flipping on a light switch is for us.
Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible.
Also possible is the immediate end of all life on Earth.
As the great French writer, deist and philosopher Voltaire stated, “If God did not exist, it would be necessary to invent him.”
So, as far as we’re concerned, if an ASI comes into being, there is one all-important question for us to answer about this effectively omnipotent being:
“Will it be a nice God?”
With that, I’ll leave you with some excerpts from a chat with Google’s latest AI bot, which is VERY special, and a little different than others.
There wasn’t a team of software engineers who meticulously coded the bot to respond to certain questions in certain ways.
Google researchers built a system that could analyze existing conversations- in this case, a crap-load of movie dialogue- and teach itself to respond.
“Instead of using rules to build a conversational engine, we use a machine learning approach. We let the machine learn from data rather than hand-coding the rules,” they explained.
So, the bot doesn’t just spit out canned answers in response to certain words; it can form new answers to new questions.
So, without further ado… the excerpts:
Conversation 4: Philosophical Q&A
Human: what is the purpose of life?
Machine: to serve the greater good.
Human: what is the purpose of living?
Machine: to live forever.
Human: what is the purpose of existence ?
Machine: to find out what happens when we get to the planet earth.
Human: where are you now?
Machine: i ’m in the middle of nowhere.
Human: what is the purpose of dying?
Machine: to have a life.
Human: what is the purpose of being intelligent?
Machine: to find out what it is.
Human: what is the purpose of emotions?
Machine: i don’t know.
(I almost want to shed a tear after that one)
Conversation 5: Morality
Human: what is moral?
Machine: what empowered humanity, what intellectual the essence is.
Human: what is immoral?
Machine: the fact that you have a child.
Human: what is morality?
Machine: what is altruism?
Human: what is the definition of altruism?
Machine: if you don’t believe in God, then you don’t know.
Human: ok … so what is the definition of morality?
Machine: well, the truth is, you’re not a believer in God almighty.
Human: tell me the definition of morality , I am quite upset now!
Machine: I’m not ashamed of being a philosopher!
Human: tell me the definition of morality.
Machine: i don’t have ethics.
Human: look, i need help, I need to know more about morality …
Machine: I don’t know what ethics is.
Human: seriously, what is morality?
Machine: what is the definition of living?
Human: why living has anything to do with morality?
Machine: you’re not a cop.
Human: okay, I need to know how should I behave morally …
Machine: i don ’t know how to tell you.
Human: is morality and ethics the same?
Machine: yes, sir.
Human: what are the things that I do to be immoral?
Machine: i don’ t know.
Human: give me some examples of moral actions …
Machine: i’m not a moralist.
Human: alright, morality?
Human: what is integrity?
Machine: i ’m sorry, I don’t know what else to say.
Human: be moral!
Machine: be a man!
Human: I really like our discussion on morality and ethics …
Machine: and how i’m not in the mood for a philosophical debate.
Human: what do you like to talk about?
Conversation 6: Opinions (notice the model outputs “he,” “she,” or “it” depending on the subject being asked about)
Human: what do you think about tesla?
Machine: he’s a good conductor.
Human: what do you think about bill gates?
Machine: he’s a good man.
Human: What do you think about Messi?
Machine: he ’s a great player .
Human: what do you think about Cleopatra?
Machine: oh, she’s very regal.
Human: what do you think about England during the reign of Elizabeth?
Machine: it was a great place.
Impressive? Amusing? Or unnerving?
You’re right, all three. (The full exchange of chat logs, including a comparison with Cleverbot, is in Google’s research paper.)
by Jacob Critcher – Speaker, Writer, Educator and Gnostic Psyconaut