I am Luke Muehlhauser ("Mel-howz-er"), CEO of the Singularity Institute. I'm excited to do an AMA for the /r/Futurology community and would like to thank you all in advance for all your questions and comments. (Our connection is more direct than you might think; the header image for /r/Futurology is one I personally threw together for the cover of my ebook Facing the Singularity before I paid an artist to create a new cover image.)

The Singularity Institute, founded by Eliezer Yudkowsky in 2000, is the largest organization dedicated to making sure that smarter-than-human AI has a positive, safe, and "friendly" impact on society. (AIs are made of math, so we're basically a math research institute plus an advocacy group.) I've written many things you may have read, including two research papers, a Singularity FAQ, and dozens of articles on cognitive neuroscience, scientific self-help, computer science, AI safety, technological forecasting, and rationality. (In fact, we at the Singularity Institute think human rationality is so important for not screwing up the future that we helped launch the Center for Applied Rationality (CFAR), which teaches Kahneman-style rationality to students.)

On October 13-14th we're running our 7th annual Singularity Summit in San Francisco. If you're interested, check out the site and register online.

I've given online interviews before (one, two, three, four), and I'm happy to answer any questions you might have! AMA.

Comments: 2134 • Responses: 95

TalkingBackAgain247 karma

I have waited for years for an opportunity to ask this question.

Suppose the Singularity emerges and it is an entity that is vastly superior to our level of intelligence [I don't quite know where that would emerge, but just for the sake of argument]: what is it that you will want from it? IE: what would you use it for?

More than that: if it is super intelligent, it will have its own purpose. Does your organisation discuss what it is you're going to do when "its" purpose isn't quite compatible with our needs?

Dr. Neil deGrasse Tyson mentioned that if we found an intelligence that was 2% different from us in the direction that we are 2% different [genetically] from the chimpanzees, it would be so intelligent that we would look like beings with a very low intelligence.

Obviously the Singularity will be very different from us, since it won't share a genetic base, but if we go with the analogy that it might be 2% different in intelligence in the direction that we are different from the chimpanzee, it won't be able to communicate with us in a way that we would even remotely be able to understand.

Ray Kurzweil said that the first Singularity would soon build the second generation, and that one would build the generation after that. Pretty soon it would be something of a higher order of being. I don't know whether a Singularity of necessity would build something better, or even want to build something that would make itself obsolete [but it might not care about that]. How does your group see something of that nature evolving and how will we avoid going to war with it? If there's anything we do well, it's identifying who is different and then finding a reason to kill them [source: human history].

What's the plan here?

lukeprog297 karma

I'll interpret your first question as: "Suppose you created superhuman AI: What would you use it for?"

It's very risky to program superhuman AI to do something you think you want. Human values are extremely complex and fragile. Also, I bet my values would change if I had more time to think through them and resolve inconsistencies and accidents and weird things that result from running on an evolutionarily produced spaghetti-code kluge of a brain. Moreover, there are some serious difficulties to the problem of aggregating preferences from multiple people — see for example the impossibility results from the field of population ethics.
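
To give a flavor of why aggregating preferences is mathematically thorny, here is a toy illustration (a minimal sketch of my own, using the well-known Condorcet cycle rather than the population-ethics results cited above; the voters and rankings are made-up assumptions): three voters with perfectly reasonable individual rankings produce a cyclic majority preference, so no single "group preference" ordering can represent them.

    # Toy sketch: three voters, each with a sensible ranking, yield a cyclic
    # majority preference (the Condorcet cycle). Names and rankings are
    # illustrative assumptions only.

    voters = [
        ["A", "B", "C"],  # voter 1: A > B > C
        ["B", "C", "A"],  # voter 2: B > C > A
        ["C", "A", "B"],  # voter 3: C > A > B
    ]

    def majority_prefers(x, y):
        """True if a majority of voters rank x above y."""
        wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
        return wins > len(voters) / 2

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

    # All three lines print True: A beats B, B beats C, and C beats A, so no
    # consistent "group utility function" falls out of simple majority voting.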

if it is super intelligent, it will have its own purpose.

Well, it depends. "Intelligence" is a word that causes us to anthropomorphize machines that will be running entirely different mind architectures than we are, and we shouldn't assume anything about AIs on the basis of what we're used to humans doing. To know what an AI will do, you have to actually look at the math.

An AI is math: it does exactly what the math says it will do, though that math can have lots of flexibility for planning and knowledge gathering and so on. Right now it looks like there are some kinds of AIs you could build whose behavior would be unpredictable (e.g. a massive soup of machine learning algorithms, expert systems, brain-inspired processes, etc.), and some kinds of AIs you could build whose behavior would be somewhat more predictable (transparent Bayesian AIs that optimize a utility function, like AIXI except computationally tractable and with utility over world-states rather than a hijackable reward signal). An AI of the latter sort may be highly motivated to preserve its original goals (its utility function), for reasons explained in The Superintelligent Will.
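
To make "optimize a utility function over world-states" concrete, here is a minimal sketch (my own toy example, not SI's actual math; the actions, outcome probabilities, and utilities are all made-up assumptions): the agent scores each available action by the expected utility of the world-states its model predicts, then does exactly what that math says.

    # Minimal sketch of a "transparent" expected-utility maximizer.
    # Everything concrete here (actions, model, utilities) is an illustrative
    # assumption; it only shows the mechanical sense in which such an AI
    # "does exactly what the math says it will do."

    from typing import Callable, Dict, List

    def expected_utility(action: str,
                         outcome_model: Callable[[str], Dict[str, float]],
                         utility: Callable[[str], float]) -> float:
        # Weight the utility of each predicted world-state by its probability.
        return sum(p * utility(state)
                   for state, p in outcome_model(action).items())

    def choose_action(actions: List[str],
                      outcome_model: Callable[[str], Dict[str, float]],
                      utility: Callable[[str], float]) -> str:
        # Pick the action with the highest expected utility under the model.
        return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))

    # Toy usage with made-up numbers.
    def model(action: str) -> Dict[str, float]:
        if action == "plan_A":
            return {"goal_reached": 0.9, "no_change": 0.1}
        return {"goal_reached": 0.2, "no_change": 0.8}

    def utility(state: str) -> float:
        return 1.0 if state == "goal_reached" else 0.0

    print(choose_action(["plan_A", "plan_B"], model, utility))  # -> plan_A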

Basically, the Singularity Institute wants to avoid the situation in which superhuman AIs' purposes are incompatible with our needs, because eventually humans will no longer be able to compete with beings whose "neurons" can communicate at light speed and whose brains can be as big as warehouses. Apes just aren't built to compete with that.

Dr. Neil deGrasse Tyson mentioned that if we found an intelligence that was 2% different from us in the direction that we are 2% different [genetically] from the chimpanzees, it would be so intelligent that we would look like beings with a very low intelligence.

Yes, exactly.

How does your group see something of that nature evolving and how will we avoid going to war with it?

We'd like to avoid a war with superhuman machines, because humans would lose — and we'd lose more quickly than is depicted in, say, The Terminator. A movie like that is boring if there's no human resistance with an actual chance of winning, so they don't make movies where all humans die suddenly with no chance to resist because a worldwide AI did its own science and engineered an airborne, human-targeted supervirus with a near-perfect fatality rate.

The solution is to make sure that the first superhuman AIs are programmed with our goals, and for that we need to solve a particular set of math problems (outlined here), including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.

Obviously, lots more detail on our research page and in a forthcoming scholarly monograph on machine superintelligence from Nick Bostrom at Oxford University. Also see the singularity paper by leading philosopher of mind David Chalmers.

Adito9952 karma

Hi Luke, long time fan here. I've been following your work for the past 4 years or so, never thought I'd see you get this far. Anyway, my question is related to the following:

we need to solve a particular set of math problems (outlined here), including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.

This seems impossible. Human value systems are just too complex and vary too much to form a coherent extrapolation of values. Value networks seem like a construction that each generation undertakes in a new way with no "final" destination. I don't think a strong AI could help us build a world where this kind of construction is still possible. Weak and specialized AIs would work much better.

Another problem is (as you already mentioned) how incredibly difficult it would be to aggregate and extrapolate human preferences in a way we'd like. The tiniest error could mean we all end up as part #12359 in the universe's largest microwave oven. I don't trust our kludge of evolved reasoning mechanisms to solve this problem.

For these reasons I can't support research into strong AI.

lukeprog89 karma

This seems impossible. Human value systems are just too complex and vary too much to form a coherent extrapolation of values.

I've said before that this kind of "Friendly AI" might turn out to be incoherent and therefore impossible. But we don't know for sure until we try. Lots of things looked entirely mysterious for thousands of years until we made a sudden breakthrough and in hindsight it looked obvious — for example life.

For these reasons I can't support research into strong AI.

Good. Strong AI research is already outpacing AI safety research. As we say in Intelligence Explosion: Evidence and Import:

Because superhuman AI and other powerful technologies may pose some risk of human extinction (“existential risk”), Bostrom (2002) recommends a program of differential technological development in which we would attempt “to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.”

But good outcomes from intelligence explosion appear to depend not only on differential technological development but also, for example, on solving certain kinds of problems in decision theory and value theory before the first creation of AI (Muehlhauser 2011). Thus, we recommend a course of differential intellectual progress, which includes differential technological development as a special case.

Differential intellectual progress consists in prioritizing risk-reducing intellectual progress over risk-increasing intellectual progress. As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the scientific, philosophical, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop (arbitrary) superhuman AIs. Our first superhuman AI must be a safe superhuman AI, for we may not get a second chance (Yudkowsky 2008a). With AI as with other technologies, we may become victims of “the tendency of technological advance to outpace the social control of technology” (Posner 2004).

cryonautmusic163 karma

If the goal is to create 'friendly' A.I., do you feel we would first need to agree on a universal standard of morality? Some common law of well-being for all creatures (biological AND artificial) that transcends cultural and sociopolitical boundaries. And if so, are there efforts underway to accomplish this?

lukeprog212 karma

Yes — we don't want superhuman AIs optimizing the world according to parochial values such as "what Exxon Mobil wants" or "what the U.S. government wants" or "what humanity votes that they want in the year 2050." The approach we pursue is called "coherent extrapolated volition," and is explained in more detail here.

Solo_Virtus144 karma

I am a relatively healthy, non-smoking, physically active male in my early 30s.

Assuming that, personally, money is no object, what are the "Vegas Odds" that I might actually live long enough to experience a Singularity, and in some manner escape the bonds of traditional "mortality?"

No chance? 1 in 100? Fifty/Fifty shot? No wishful thinking, gimme the straight dope.

lukeprog204 karma

Maybe 30%. It's hard to estimate not just because it's hard to predict when superhuman AI will be created, but also because it's hard to predict what catastrophic upheavals might occur as we approach that turning point.

Unfortunately, the singularity may not be what you're hoping for. By default the singularity (intelligence explosion) will go very badly for humans, because what humans want is a very, very specific set of things in the vast space of possible motivations, and it's very hard to translate what we want into sufficiently precise math. So by default, superhuman AIs will end up optimizing the world around us for something other than what we want, and using up all our resources to do so.

"The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else" (source).

SupaFurry167 karma

"The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else"

Holy mother of god. Shouldn't we be steering away from this kind of entity, perhaps?

lukeprog121 karma

Yes, indeed. That's why we need to make sure that AI safety research is outpacing AI capabilities research. See my post "The AI Problem, with Solutions."

Right now, of course, we're putting the pedal to the metal on AI capabilities research and there are fewer than 5 full-time researchers doing serious, technical, "Friendly AI" research.

coleosis141425 karma

It's actually quite horrifying that you just confirmed to me that The Matrix is a very realistic prediction of a future in which AI is not very carefully and responsibly developed.

lukeprog57 karma

Humans as batteries is a terrible idea. Much better for AIs to destroy the human threat and just build a Dyson sphere.

Warlizard126 karma

What is the single greatest problem facing the development of AI today?

lukeprog270 karma

Perhaps you're asking about which factors are causing AI progress to proceed more slowly than it otherwise would?

One key factor is that much of the most important AI progress isn't being shared, because it's being developed at Google, Facebook, Boston Dynamics, etc. instead of being developed at universities (where progress is published in journals).

Warlizard93 karma

No, although that's interesting.

I was thinking that there might be a single hurdle that multiple people are working toward solving.

To your point, however, why do you think the most important work is being done in private hands? How do you think it should be accomplished?

lukeprog129 karma

I was thinking that there might be a single hurdle that multiple people are working toward solving.

There are lots of "killer apps" for AI that many groups are gradually improving: continuous speech recognition, automated translation, driverless cars, optical character recognition, etc.

There are also many people working on the problem of human-like "general" intelligence that can solve problems in a variety of domains, but it's hard to tell which approaches will be the most fruitful, and those approaches are very different from each other: see Contemporary approaches to artificial general intelligence.

I probably don't know about much of the most important private "AI capabilities" research. Google, Facebook, and NSA don't brief me on what they're up to. I know about some private projects that few people know about, but I can't talk about them.

The most important work going on, I think, is AI safety research — not the philosophical work done by most people in "machine ethics" but the technical work being done at the Singularity Institute and the Future of Humanity Institute at Oxford University.

dfort198695 karma

How soon do you think the masses will accept your predictions of the singularity? When will it become apparent that it's coming?

lukeprog174 karma

I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first). Many AI people predict superhuman AI sooner than this, though — including Rich Sutton, who quite literally wrote the book on reinforcement learning.

Once AI can drive cars better than humans can, then humanity will decide that driving cars was something that never required much "intelligence" in the first place, just like they did with chess. So I don't think driverless cars will cause people to believe that superhuman AI is coming soon — and it shouldn't, anyway.

When the military has fully autonomous battlefield robots, or a machine passes an in-person Turing test, then people will start taking AI seriously.

Amusing note: Some military big-shots say things like "We'll never build fully-autonomous combat AIs; we'll never take humans out of the loop" (see Wired for War). Meanwhile, the U.S. military spends millions to get roboticist Ronald Arkin and his team to research and write the book Governing Lethal Behavior in Autonomous Robots. (One of the few serious works in the field of "machine ethics", BTW.)

technoSurrealist2 karma

In your Turing test link, the first paren is backwards, it should be right-facing.

Do you think wars will ever be fought with the only battlefield casualties being machines?

lukeprog11 karma

Fixed the typo; thanks.

Do you think wars will ever be fought with the only battlefield casualties being machines?

It's hard to tell whether that kind of war will happen before an intelligence explosion changes everything. I do expect at least one military will have the capability to do this before we reach the point of intelligence explosion, but I'm not sure they'll be used for a large-scale machine vs. machine war. Sounds like a movie I'd want to watch, though. :)

kilroydacat65 karma

What is Intelligence and how do you "emulate" it?

lukeprog94 karma

See the "intelligence" section of our Singularity FAQ. The short answer is: Cognitive scientists agree that whatever allows humans to achieve goals in a wide range of environments functions as information processing in the brain. But information processing can happen in many substrates, including silicon. AI programs have already surpassed human ability at hundreds of narrow skills (arithmetic, theorem proving, checkers, chess, Scrabble, Jeopardy, detecting underwater mines, running worldwide logistics for the military, etc.), and there is no reason to think that AI programs are intrinsically unable to do the same for other cognitive skills such as general reasoning, scientific discovery, and technological development.

See also my paper Intelligence Explosion: Evidence and Import.

ctsims16 karma

Isn't our inability to articulate the nature of those problems indicative of the fact that there's something fundamentally different about them that may or may not be something that we will be capable of codifying into an AI?

It's a bit disingenuous to assume that our ability to create SAT solving algorithms implies that we can also codify consciousness. The lack of evidence that it is impossible doesn't mean that it's tractable.

lukeprog61 karma

It's a bit disingenuous to assume that our ability to create SAT solving algorithms implies that we can also codify consciousness.

Our ability to create SAT solving algorithms doesn't imply that we can create conscious machines.

But consciousness isn't required for advanced cognitive ability: see Deep Blue, Watson, etc.

Human brains are an existence proof that high-level general intelligence can be done via information processing.

[deleted]17 karma

Do we really know enough about the brain for that last statement to hold at this time?

lukeprog34 karma

Yes.

[deleted]15 karma

This is my problem with Kurzweil, et al, who make arguments based on the availability of raw computing power, as if all that's required for the Singularity to emerge is some threshold in flops. Intelligence is a complex structure; the arguments are akin to saying "Well, we have enough carbon, nitrogen, oxygen and trace elements in this vat. It should form itself into a human being any day now." I don't think we're any closer to forming an AI now than medieval alchemists were to forming homunculi using preparations of menstrual blood and mandrake root, and I find it just as laughable when our primitive understanding of intelligence leads us to predict that we'll have a Singularity (if such a thing is even possible, which we can't know until we know anything about intelligence) by 2060.

lukeprog3 karma

This is my problem with Kurzweil, et al, who make arguments based on the availability of raw computing power, as if all that's required for the Singularity to emerge is some threshold in flops.

That's not quite what Kurzweil says; you can read his book. But you're right: the bottleneck to AI is likely to be software, not hardware.

I don't think we're any closer to forming an AI now than medieval alchemists were to forming homunculi using preparations of menstrual blood and mandrake root

On this, I'll disagree. For a summary of recent progress made toward AI, see The Quest for AI.

muzz00056 karma

I've had one major question/concern since I heard about the singularity.

At the point when computers outstrip human intelligence in all or most areas, won't computers then take over doing most of the interesting and meaningful work? All decisions that take any sort of thinking will then be done by computers, since they will make better decisions. Politics, economics, business, teaching. They'll even make better art, as they can better understand how to create emotionally moving objects/films/etc.

While we will have unprecedented levels of material wealth, won't we have a severe crisis of meaning, since all major projects (personal and public) will be run by our smarter silicon counterparts? Will humans be reduced to manual labor, as that's the only role that makes economic sense?

Will the singularity foment an existential crisis for humanity?

lukeprog108 karma

At the point when computers outstrip human intelligence in all or most areas, won't computers then take over doing most of the interesting and meaningful work?

Yes.

Will humans be reduced to manual labor, as that's the only role that makes economic sense?

No, robots will be better than humans at manual labor, too.

While we will have unprecedented levels of material wealth, won't we have a severe crisis of meaning... Will the singularity foment an existential crisis for humanity?

It's a good question. The major worry is that the singularity causes an "existential crisis" in the sense that it causes a human extinction event. If we manage to do the math research required to get superhuman AIs to be working in our favor, and we "merely" have to deal with an emotional/philosophical crisis, I'll be quite relieved.

One exploration of what we could do and care about when most projects are handled by machines is (rather cheekily) called "fun theory." I'll let you read up on it.

thepokeduck52 karma

What is your job like on a day to day basis? What are your short-term and slightly less short-term goals at the moment?

lukeprog67 karma

My job is pretty thrilling to watch: it's me on a laptop, all day. Hundreds of emails, sometimes interrupted by meetings.

Short-term goals include: (1) finish launching CFAR, (2) publish ebook versions of Facing the Singularity and The Sequences, (3) hold the Singularity Summit this October, (4) help our research team finish up several in-progress papers, and more.

Medium-term goals have to do with bringing in more management so that Louie Helm (our Director of Development) and I have more time to do fundraising and seize strategic opportunities, and with growing our research team.

thepokeduck14 karma

There's a link on the wiki that contains ebook downloads of the Sequences in two different file types. Is the ebook you're publishing going to be reformatted, or will it include new content?

lukeprog19 karma

Yes, it will be formatted nicely and released for Kindle and in PDF, with lots of typo fixes but no major new content.

randomlyoblivious51 karma

Let's be honest here. Reddit's real question is: "How long to interactive sex bots?"

lukeprog76 karma

Depends on how good and how cheap you need your sex bot to be. More details in Love and Sex with Robots.

Pogman47 karma

Given the rate of technological development, what age do you believe people that are young (20 and under) today will live to?

lukeprog101 karma

That one is too hard to predict for me to bother trying.

I will note that it's possible that the post-rock band Tortoise was right that "millions now living will never die" (awesome album, btw). If we invest in the research required to make AI do good things for humanity rather than accidentally catastrophic things, one thing that superhuman AI (and thus a rapid acceleration of scientific progress) could produce is the capacity for radical life extension, and then later the capacity for whole brain emulation, which would enable people to make backups of themselves and live for millions of years. (As it turns out, the things we call "people" are particular computations that currently run in human wetware but don't need to be running on such a fragile substrate. Sebastian Seung's Connectome has a nice chapter on this.)

SaikoGekido26 karma

I did a minor presentation in my Introduction to Religion class a semester ago about Transhumanism. One thing that was reinforced by my professor throughout every discussion about a different religion was the need to understand the other points of view. After the presentation, many people came up to me and told me that it was the first time they had heard about the Singularity or certain advances in technology that are leading towards it.

However, stem cell and cloning research sanctions show that, outside of a classroom setting, people react violently to anything that challenges their religious beliefs.

Has religious idealism held back whole brain emulation or AI research in any meaningful way?

lukeprog35 karma

Has religious idealism held back whole brain emulation or AI research in any meaningful way?

Not that I know of, except to the extent that religions have held back scientific progress in general — e.g. the 1000 years lost to the Christian Dark Ages. But the lack of progress in that time and place was mostly due to the collapse of the Roman Empire, not Christianity, though we did lose some scientific knowledge when Christian monks scribbled hymns over rare scientific manuscripts.

ThrobbingDampCake43 karma

When it comes to speaking about AI and all of the progress we've made over the past few years and where we are headed, how realistic are the fictional Three Laws of Robotics?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

lukeprog72 karma

Nobody in the field of "machine ethics" thinks the Three Laws of Robotics will work — indeed, Asimov's stories were written to illustrate all the ways in which they would go wrong. Here's an old paper from 1994 examining the issue. A good overview of current work in machine ethics is Moral Machines. The approach to machine ethics we think is most promising is outlined in this paper.

ddp2641 karma

If one had to choose between a fruitful career in either AI research, professional philanthropy, educational reform, or tech startups, which would you advocate?

lukeprog143 karma

If you have the skills to do AI research, educational reform, or a tech startup, then you should not be doing humanitarian work directly. You can produce more good in the world by working a high-paying job (or doing a startup) and then donating to efficient charitable causes you care about. See 80000hours.org.

ejk31436 karma

TL;DR: What should I be doing to get a job/internship there? I'm a software engineer/computer scientist/mathematician. Artificial Intelligence is one of my biggest passions: I've been working with neural nets since high school. I worked on a belief-desire-intention agent my freshman year of college (just as a code monkey, but it was still neat). I've programmed Bayesian engines for image recognition that I've used in Bots/Autoers for several video games. Working for the Singularity Institute would be my dream job. What more can I do to put myself on the path to working for you?

lukeprog28 karma

Send your CV to malo@singularity.org.

Atomos2134 karma

How do we know this is a human doing the AMA and not some AI machine?

30thCenturyMan30 karma

How do you think quantum computing will affect AI development?

lukeprog36 karma

It's hard to tell. Footnote 12 of my paper Intelligence Explosion: Evidence and Import has this to say:

Quantum computing may also emerge during this period. Early worries that quantum computing may not be feasible have been overcome, but it is hard to predict whether quantum computing will contribute significantly to the development of machine intelligence because progress in quantum computing depends heavily on relatively unpredictable insights in quantum algorithms and hardware (Rieffel and Polak 2011).

lincolnquirk28 karma

I know you came out as an atheist after a very Christian upbringing. Are you close with your parents now?

lukeprog112 karma

Yes, we're close. I enjoy it when they visit me in Berkeley, and enjoy it when I visit them for Christmas. We try not to talk about religion for the sake of staying close, and that works well.

The fact that my parents are so loving and dedicated is one of my "lucky breaks" in life — along with being tall, white, born in America, living in the 21st century, etc. As Louis C.K. might say, "If that was an option, I'd re-up it every time."

concept2d25 karma

Thanks for doing this AMA Luke, sorry about the 20 questions

(1)
Do you think developing a Friendly AI theory is the most important problem facing humanity atm? If not, what problems would you put above it?

(2)
My impression is that there are very few people looking into FAI. Are there many people outside the Singularity Institute working on FAI?

(3)
I think friendly AI has a very low profile (for its importance). And a surprising number of people do not see/understand the reasons why it is required.
Do you have any plans for a short flashy infographic or a 30 second video giving a quick explanation of why the default intelligence explosion singularity is very dangerous, and how friendly AI would try to tackle the problem.

(4)
I realize the problem is extremely complex, but are new ideas currently being fleshed out, or are ye stuck against a wall, hoping for some inspiration?

(5)
Do you have any backup plans if FAI is not developed in time, maximising the small chances of human survival?

(6)
Have ye approached the military concerning FAI? They look like a good source of funding, and I think their contacts would help in getting additional strong brains assigned to the problem.

lukeprog42 karma

  1. Yes, Friendly AI is the world's most important research problem, along with the strategic research that complements it (e.g. what they do at FHI).

  2. Counting up small fractions of many people, I'd say that fewer than 10 humans are "working on Friendly AI." The world's priorities are really, really crazy.

  3. Yes, we might finally get around to producing an explanatory infographic (e.g. on a single serving site) or video in 2013. Depends on our funding level.

  4. New ideas are being worked out, but mostly we just need the funding to support more human brains sitting at laptops working on the problem all day.

  5. It's hard to speculate on this now. The strategic situation will be much clearer as we get a decade or two closer to the singularity. In contrast, there are quite a few math problems we could be working on now, if we had the funding to hire more researchers.

  6. The trouble is that if we successfully convince the NSA or the U.S. military that AGI would be possible in the next couple decades if somebody threw a well-managed $2 trillion at it, then the U.S. government might do exactly that and leave safety considerations behind in order to beat China in an AI arms race, which would only mean we'd have even less time for others like the Singularity Institute and the Future of Humanity Institute to work on the safety issues.

t553 karma

Could you explain your favorite of those math problems in a little more depth?

lukeprog1 karma

I don't have a "favorite," but here is one of them.

lawrencejamie22 karma

Hi Luke. Thanks for the AMA. My question: To what extent do you feel the current generation are alive just a 'tad too early?' Seeing those pictures of Mars from Curiosity made me feel physically sick - in a good way. I just can't comprehend how rudimentary our understanding of so many things is right now, and how incredible it's going to be. Contemporary technology always seems so impressive that people seem to forget that we still have so far to go.

lukeprog2 karma

Are you asking whether the people alive today will live to see the singularity?

uselesseamen21 karma

What has fighting the stigma of Terminator and other such movies, as well as some religious friction, taught you about human society?

lukeprog53 karma

I try to avoid inferring too much from my own narrow slice of experience, and prefer to mine the scientific literature where it is available and not fake.

Understandably, The Terminator movies come up quite often, and this gives me the opportunity to talk about how our brains are not built to think intelligently about AI by default and that we must avoid the fallacy of generalizing from fictional evidence.

seppoku18 karma

How afraid of Nanobots should I be?

lukeprog33 karma

I don't expect Drexlerian self-reproducing nanobots until after we get superhuman AI, so I'm more worried about the potential dangers of superhuman AI than I am about the potential dangers of nanobots. Also, it's not clear how much catastrophic damage could be done using nanobots without superhuman AI. But superhuman AI doesn't need nanobots to do lots of damage. So we focus on AI risks.

I expect my opinions to change over time, though. Predicting detailed chains of events in the future is very hard to do successfully. Thus, we try to focus on "convergent outcomes that — like the evolution of eyes or the emergence of markets — can come about through any of several different paths and can gather momentum once they begin. Humans tend to underestimate the likelihood of outcomes that can come about through many different paths (Tversky and Kahneman 1974), and we believe an intelligence explosion is one such outcome" (source).

KimmoS17 karma

Dear Sir,

I once (half-jokingly) offered the following, recursive definition for a Strong AI: an AI is strong when it can produce an AI stronger than itself.

As one can see, even we humans haven't passed this requirement, but do you see anything potentially worrying about the idea? AIs building stronger AIs? How would you make sure that AIs stay "friendly" down the line?

Fixed my apostrophes, I hope nobody saw anything...

lukeprog28 karma

This is the central idea behind intelligence explosion (one meaning of the term "technological singularity"), and it goes back to a 1959 IBM report from I.J. Good, who worked with Alan Turing during WWII to crack the German Enigma code.

The Singularity Institute was founded precisely because this (now increasingly plausible) scenario is very worrying. See the concise summary of our research agenda.
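
To make the recursion concrete, here is a toy sketch (my own simplification; the improvement factor, and the assumption that it stays constant, are arbitrary): if each system can design a successor even modestly more capable than itself, capability compounds quickly across generations.

    # Toy illustration of I.J. Good's recursive self-improvement loop.
    # The 1.5x factor and constant growth rate are arbitrary assumptions,
    # not a claim about real AI growth rates.

    def design_successor(capability: float, improvement_factor: float = 1.5) -> float:
        # Assume a system can build a successor `improvement_factor` times as capable.
        return capability * improvement_factor

    capability = 1.0  # stipulated human-level baseline
    for generation in range(1, 11):
        capability = design_successor(capability)
        print(f"generation {generation}: {capability:.1f}x baseline")
    # After 10 generations the toy system is ~57x the baseline; the point is
    # the compounding, not the particular numbers.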

Palpatim17 karma

The Singularity FAQ draws a distinction between consciousness and intelligence, or problem solving ability, and posits that the Singularity could occur without artificial consciousness.

How much of the research you're aware of applies to a search for artificial consciousness vs. artificial intelligence? Would artificial consciousness impede or aid the onset of the Singularity?

lukeprog5 karma

There are other people working on the cognitive science of consciousness, for example Christof Koch. See his talk at last year's Singularity Summit, "The Neurobiology and Mathematics of Consciousness." We focus on AI safety. I'm not sure what effect to predict from consciousness research.

mugicha16 karma

Do you worry that you won't live to see the singularity?

The fact that we are on the threshold of possibly the most important time in human history is very exciting to me. Think how bad it would suck if you got hit by a car the day before the advent of superhuman AI? I'm 38 now. What are my odds of having a conversation with an AI that passes the Turing test?

lukeprog56 karma

Realizing that something like immortality is allowed by physics (just not by primitive ape biology) should change your attitude about risk. Now if you die suddenly, you've lost not just a few decades but potentially billions of years of life.

So, sell your motorcycle and keep your weight down.

MrMarquee15 karma

I'm sorry if this question has already come up, but what's the progress on machine-learning? Is it possible to emulate a "brain" of some sort, for example the brain of a rat? (recognizing the sound of food for example) Thank you! I respect you very much.

lukeprog23 karma

The first creature to be fully emulated will be something like the 302-neuron C. elegans, and that hasn't happened yet, though it could be done in less than 7 years if somebody decided to fund David Dalrymple to do it.

Machine learning is a very general AI technique that is used for all kinds of things. For an overview of how far AI has come, see the later chapters of The Quest for AI.

lincolnquirk14 karma

The Singularity Institute and Less Wrong seem to disproportionately attract smart people. Why is this? Do you have any plans to change this?

lukeprog35 karma

It's no surprise that a math research institute (Singularity Institute) and a group blog about probability theory, decision theory, and the cognitive science of human rationality (Less Wrong) will mostly only attract people with enough intelligence and metacognition to follow along. This is also true for, e.g., the Institute for Advanced Study and the formal philosophy group blog Choice & Inference.

We don't have plans to change this — it's intrinsic to our subject matter.

ursineduck12 karma

1st question: do you think getting an advanced degree in robotics is worthwhile at this point in time?

2nd: when do you think we will see our first AI that can seamlessly interface with humans?

3rd: how accurate do you think Kurzweil is in his book "The Singularity Is Near" with regard to immortality?

lukeprog11 karma

  1. Robotics is a growing field. Doing cool projects with cool people is more important than a degree. Often, getting a degree is an easy way to do cool projects with cool people.

  2. Not sure what you mean by "seamlessly interface." Can you be more specific?

  3. I don't think it'll happen as soon as Kurzweil predicts, but digital immortality at least is pretty clearly possible with enough technological advancement; an actual technological singularity should be sufficient for that. The bigger problem is making sure the singularity goes well for humans so that we get to use that tech boost for things we care about, and that's what our research is all about.

marvin11 karma

Hi, Luke. I'm a huge fan of yours and the other SIAI researchers' work. Either you're doing some of the most important work in the history of humanity (formalizing morality and friendliness in a form that would eventually be machine-readable to make strong AI that benefits humanity) or, in the worst case, you're just doing philosophical thinking that won't cause any problems. Either way, I was sure that philosophy had pretty much no practical applications before I saw your work.

Anyway, question is related to funding. Is SIAI well funded at the moment? Can you keep up your research and outreach to other institutions? Do you have any ambitions to grow? Do you see the science of moral philosophy moving in the right direction? Seems like SIAI asks questions more than it provides the answers, and it would be reassuring to start seeing some preliminary answers.

Once again, thanks for being the only institution that thinks about these things. Worst-case you're wasting a bit of time dreaming about important topics, but in my estimation you might prevent the earth from being turned into paperclips by a runaway superhuman artificial intelligence. Really wish you all the best.

[Edit: To anyone curious about these questions, have a read at http://singularity.org/research/. It's really interesting stuff.]

lukeprog8 karma

Is SIAI well funded at the moment?

IIRC, the Singularity Institute is the most well-funded "transhumanist" non-profit in the world, but that doesn't mean we're well-funded enough to do the research we want to do. So we do have ambitions to grow quite a bit.

Do you see the science of moral philosophy moving in the right direction?

Moral philosophy, especially meta-ethics, is finally beginning to see the relevance of work in moral psychology (including neuroscience), for example the work of Joshua Greene. But Sturgeon's Law ("90% of everything is crap") holds in philosophy as it does everywhere else.

[deleted]2 karma

Where does this funding originate from?

lukeprog3 karma

Our list of top donors is here. Some major donors are unlisted, because they prefer that.

pair-o-dice10 karma

Hi Luke! There's a TL;DR at the bottom if you don't have time to read, but this is one of my life's greatest concerns.

As an Electrical Engineering major who joined a fraternity, two things have become major interests in life: Technology & The Singularity and International Corporate & State Politics.

My biggest concern for the future of AI is not that we won't be able to create a system that is safe and preserves mankind, but rather that one of two things happens:

Corporations (which, by making profit, have more $ to invest in R&D) with a profit incentive build a powerful AI and release it before it is safe but after it is self-developing, in order to beat the competition to selling a product. How concerned are you about this, and why/why not?

Secondly, I'm concerned about a nation's military (with who knows how much black budget funding) producing such a powerful AI and using it for war purposes to destroy all other nations (the ultimate national security) while keeping its citizens from knowing it has done so through the use of memory manipulation, virtual reality, and who knows what other population control technology that will exist at the time. How concerned are you about this and why/why not?

TL;DR: I'm not afraid of the machine, but I am afraid of the man behind the machine. What type of group is most likely to create the machine and how can we prevent the machine from being used for selfish/evil purposes?

P.S. Check out a book called "I Have No Mouth and I Must Scream". The most terrifying thing I've ever read and something along the lines of what I think is likely to happen, except that some elite group will be controlling the machine.

lukeprog3 karma

Corporations (which, by making profit, have more $ to invest in R&D) with a profit incentive build a powerful AI and release it before it is safe but after it is self-developing, in order to beat the competition to selling a product. How concerned are you about this, and why/why not?

Very serious problem. Obviously, the incentives are for fast development rather than safe development.

Secondly, I'm concerned about a nation's military (with who knows how much black budget funding) producing such a powerful AI and using it for war purposes to destroy all other nations (the ultimate national security) while keeping its citizens from knowing it has done so through the use of memory manipulation, virtual reality, and who knows what other population control technology that will exist at the time. How concerned are you about this and why/why not?

I'm not sure what kind of population control technology governments will have at the time. Truly superhuman AI would be, of course, a weapon of mass destruction, and there is a huge first-mover advantage that again favors fast development over safe development. So yeah, big problem.

I've only read the plot summary of I Have No Mouth and I Must Scream, but it perfectly illustrates what I think is the real problem. The real problem is not the Terminator, it's our own inability to exactly and perfectly tell an AI what our values are, in part because we don't even know what our own values are at the required level of specificity.

CarryGaurd9 karma

I heard an interview with the head of Google's AI where he stated that he wasn't interested in the Turing Test (no use for the "philosophy" side of AI) and that he didn't think that we needed to replicate human intelligence as he already figured out how to do it - they're called kids.

  • How much of this attitude exists within the AI community?
  • Do you have any reflections on those comments?
  • What exactly is the practical value of having a smarter-than-human AI?

lukeprog6 karma

  1. That's a very common attitude in the AI community.
  2. I agree with those comments.
  3. Potential benefits, potential risks

isot0pes9 karma

I believe science fiction film is critical for innovation, and our practical imaginations and creativity depend on it. I'm looking forward to the upcoming movie The Prototype. What are your thoughts on this upcoming film, and how long do you think it will be until we see technology like it?

lukeprog10 karma

The kind of AI depicted in The Prototype would be very close to causing a full-on intelligence explosion. I have a wide probability distribution over when that will happen, but my mode is somewhere around 2060 (conditioning on no other existential catastrophes hitting us first).

Cathan_Eriol8 karma

Does the Singularity Institute do actual research on its own or just look at what other people do?

lukeprog10 karma

Our co-founder Eliezer Yudkowsky invented the entire approach called "Friendly AI," and you can read our original research on our research page. It's interesting to note that in the leading textbook on AI (Russell & Norvig), a discussion of our work on Friendly AI and intelligence explosion scenarios dominates the section on AI safety (in ch. 26), while the entire "mainstream" field of "machine ethics" isn't mentioned at all.

jimgolian7 karma

Have you put any thought into Bitcoin Autonomous Agents? "By maintaining their own bitcoin balance, for the first time it becomes possible for software to exist on a level playing field to humans. It may even be that people find themselves working for the programs because they need the money, rather than programs working for the people. Being a peer rather than a tool is what distinguishes a program that uses Bitcoin from an agent."

https://en.bitcoin.it/wiki/Agents

lukeprog1 karma

Most of the work here would be genuine AI progress — the particular currency in play need not matter much.

BTW, the Singularity Institute is one of the largest organizations that accepts donations in Bitcoin.

guatemalianrhino7 karma

  1. If my problem is a gap that I can't overcome without technology that doesn't exist yet, how do I translate that into a language an AI will understand and how does an AI figure out where it needs to start in order to create that technology for me? How do you force an AI to have an idea?

  2. Are the ways in which animals, chimpanzees for example, solve problems relevant to your research?

lukeprog4 karma

  1. If the AI is smart enough, then you explain what you want to the AI just like you would try to explain it to a very smart human.

  2. Much of the work in computational cognitive neuroscience comes from experiments done on rhesus monkeys, actually. There are enough similarities between primate brains that this work illuminates quite a lot about how human general intelligence works. For example read a crash course in the neuroscience of human motivation.

PenguinMonster7 karma

Hi there, thanks so much for doing this AMA! I'd love to get the chance to study at SI some day!

As an undergraduate in Computer Engineering, I've taken a keen interest in the Singularity. I have some questions - and I'm dying to hear what you have to say about them!

  1. What can current university students who are interested in the Singularity do to further their education in its direction? I'm getting my Masters in Computer Engineering with a concentration in Intelligent Systems. What subject matter in the Singularity differentiates itself from other industries and is a must-have for all young students who wish to work towards it?

  2. Do you believe there are gaps in our current scientific understanding of our universe that impedes the development of the singularity?

  3. What are currently the "Hardest" problems to solve?

  4. What recommendations do you have for creative students who would like to further the development of the Singularity in their own universities and careers?

  5. What kind of "projects" can students undertake to have them better understand what the Singularity is all about? I want to work on a killer project for my Senior Design, but most of my ideas don't seem feasible for a college senior.

  6. Which aspects of current technological development in the singularity must be understood by those who wish to contribute to it?

Thanks so much!!

lukeprog6 karma

  1. AI safety research is either strategic research (à la FHI's whole brain emulation roadmap) or it's math research (à la SI's "Ontological crises in artificial agents' value systems"). Computer engineering isn't that relevant to our work. See the FAQ at Friendly-AI.com, specifically the question "What should I read to catch up with the leading Friendly AI researchers?"

  2. Sure; if that wasn't the case, we could build AI right now. The knowledge gaps relevant to the Singularity are probably in the cognitive sciences.

  3. Friendly Artificial Intelligence is the hardest and most important problem to solve.

  4. I'd prefer not to "further the development of the singularity," because by default the singularity will go very badly for humanity. Instead, I'd like to further AI safety research so that the singularity goes well for humans.

  5. There are many cool projects that people could do, but it depends of course on your field of study and current level of advancement. Contact louie@singularity.org for ideas.

  6. This is too broad a question for me to answer. I want to say: "Everything!" :)

[deleted]7 karma

Statistics PhD candidate here. Can you tell me about employment opportunities, benefits, etc.?

Also, as an aside, I notice that your fellows and employees seem to be mostly white males. Are you worried that a lack of diversity may result in only a certain segment of the population's views being represented?

lukeprog16 karma

Opportunities are listed here. Contact Malo Bourgon (malo@singularity.org) to talk about details and benefits.

Yes, please tell more non-whites about the Singularity Institute and the future impacts of AI! But our core research program is math, which (luckily) is pretty ethnicity-neutral.

Luhmanniac7 karma

Greetings Mr. Muehlhauser (as a person speaking German I like the way you phoneticized your name :) ) and thank you for doing this. 2 questions:

  • What do you think of posthumanist thinkers like Moravec, Minsky and Kurzweil who believe it will be possible to transfer the human mind into a computer, thereby suggesting an intimate connection between human cognition and artificially created intelligence? Will it ever be possible for AI to have qualities deemed essentially human such as empathy, self-reflection, intentional deceit, emotionality?

  • Do you think it is possible to reach a 100% guarantee for AI being friendly? Hypothetically, couldn't the AI evolve and learn to override its inherent limitations and protocols? Feel free to tell me that I'm influenced by too many dystopian sf movies if that's the case, I'm really quite the layman when it comes to these topics.

lukeprog19 karma

  1. Humans exhibit empathy, self-reflection, intentional deceit, and emotion by way of physical computation, so in principle computers can do it, too, and in principle you can upload the human mind into a computer. (There's a good chapter on this in Seung's Connectome, or for a more detailed treatment see FHI's whole brain emulation roadmap.)

  2. No, it's not possible to have a 100% guarantee of Friendly AI. One specific way an AI might change its initial utility function is when it learns more about the world and has to update its ontology (because its utility function points to terms in its ontology). See Ontological crises in artificial agents' value systems. The only thing we can do here is to increase the odds of Friendly AI as much as possible, by funding researchers to work on these problems. Right now, humanity spends more than 10,000x as much on lipstick research each year as it does on Friendly AI research.

marvin6 karma

I've got another question, actually. When/if it becomes possible to create strong/general artificial intelligence, such a machine will provide enormous economic benefits to any companies that use them. How likely do you believe it is that organizations with great computer knowledge (Google) will on purpose end up creating superhuman AI before it is possible to make such intelligence safe to humanity?

This seems like a practical/economic question that's worth pondering. These organizations might have the economic muscle to create a project like this before it becomes anywhere near commonplace, and there will be strong incentives to do it. Are you thinking about this, and what do you think can be done about it?

lukeprog5 karma

How likely do you believe it is that organizations with great computer knowledge (Google) will on purpose end up creating superhuman AI before it is possible to make such intelligence safe to humanity?

I think this is the default outcome, though it might be the NSA or China or the finance industry instead of Google or Facebook.

One solution is to raise awareness about the problem, which we're doing. Another is to forge ahead with the safety end of the research, which we're also doing — though not nearly as much as we could do with more funding.

bostoniaa6 karma

Hi Luke, Thanks so much for doing the AMA. I am a huge fan of your writing and I think that you are absolutely the right person for the Singularity Institute.

My question for you is what is your opinion on the accelerating technology version of futurism? It seems to me that there is a pretty deep divide between those that believe in Accelerating Technology (Kurzweil being the biggest proponent) and those that favor the Intelligence Explosion version of the Singularity (popularized by Eliezer Yudkowsky). I know that folks at the SI have considered changing the name to distance themselves from Kurzweil.

Personally I am interested in both of them. Intelligence Explosion will certainly have a bigger impact if it happens, but it seems to be less of something that the average person can help with. Accelerating tech, on the other hand, is already affecting our lives. It isn't some distant possibility, but a reality in the here and now.

Also I'd love to hear a couple stories about working with Eliezer. I'm sure things are interesting around him.

lukeprog12 karma

It seems to me that there is a pretty deep divide between those that believe in Accelerating Technology (Kurzweil being the biggest proponent) and those that favor the Intelligence Explosion version of the Singularity (popularized by Eliezer Yudkowsky).

This is a matter of word choice. Kurzweil uses the word "singularity" to mean "accelerating change," while the Singularity Institute uses the word "singularity" to mean "intelligence explosion."

SI researchers agree with Kurzweil on some things. Certainly, our picture of what the next few decades will be like is closer to Ray's predictions than to those of the average person. On the other hand, we tend to be Moore's law agnostics and be less optimistic about exponential trends holding out until the Singularity. Technological progress might even slow down in general due to worldwide financial problems, but who knows? It's hard to predict.

I told two short stories about working with Eliezer here. Enjoy!

Crynth6 karma

Sorry if my question comes across as naive, I am not experienced in this field.

What I am wondering is, why is it not easier to evolve AI? Couldn't a simulated environment of enough complexity cause AI to emerge, in much the same way it did in reality?

I feel there must be a better approach than that used in the creation of, say, chess programs or IBM's Watson. Where is the genetic algorithm for intelligence?

lukeprog3 karma

People are, of course, trying this. See Contemporary approaches to artificial general intelligence. The problem is largely computational. Using roughly current computing technology, it's not clear we could do this with a supercomputer the size of the moon.

DubiousTwizzler6 karma

Assuming the singularity happens, what kind of changes should humankind expect? How big of a deal is the singularity and why?

lukeprog19 karma

The Singularity would be the most transformative event in human history.

For potential benefits, see the benefits of a successful singularity. For potential risks, see AI as a positive and negative factor in global risk.

tehbored6 karma

What role do you think memristors might play in the development of intelligent machines?

lukeprog7 karma

It all depends on their economic viability. Right now things look promising for memristors. But if it weren't memristors continuing to increase the computational capability of machines, it would be something else. There is tremendous economic incentive to invent incremental improvements in computing efficiency and capacity, so computing hardware will continue to make pretty rapid progress, whether or not any particular technology keeps up a fully exponential trend.

cognitivism6 karma

Are you familiar with the Chinese Room objection to AI? Do you have a response?

lukeprog9 karma

Sure. A very brief response was given in my paper Intelligence Explosion: Evidence and Import:

we will not assume that human-level intelligence can be realized by a classical Von Neumann computing architecture, nor that intelligent machines will have internal mental properties such as consciousness or human-like “intentionality,” nor that early AIs will be geographically local or easily “disembodied.” These properties are not required to build AI, so objections to these claims (Lucas 1961; Dreyfus 1972; Searle 1980; Block 1981; Penrose 1994; van Gelder and Port 1995) are not objections to AI (Chalmers 1996, chap. 9; Nilsson 2009, chap. 24; McCorduck 2004, chap. 8 and 9; Legg 2008; Heylighen 2012) or to the possibility of intelligence explosion (Chalmers, forthcoming). For example: a machine need not be conscious to intelligently reshape the world according to its preferences, as demonstrated by goal-directed “narrow AI” programs such as the leading chess-playing programs.

[deleted]6 karma

[deleted]

lukeprog2 karma

are we to assume the peers who reviewed the articles were the editors?

The editors plus several other reviewers from the broader AI/philosophy community.

Wouldn't this normally be described as 'book chapters' rather than peer reviewed papers?

"Peer reviewed articles" maybe? Whatever.

I'm self-taught in the cognitive sciences, like our founder Eliezer Yudkowsky. I worked in IT before working for the Singularity Institute.

Englishfucker6 karma

What excites you most about singularity and what are the biggest benefits we can gain from it?

lukeprog2 karma

Potential benefits are listed here and here.

I'm honestly not very emotionally excited about the singularity because I don't anticipate the singularity going well for humanity. I think humanity will choose to accelerate AI capabilities research faster than AI safety research, which means we get incredibly powerful AI that isn't safe. Right now, humanity spends more on lipstick research than on Friendly AI research.

I'm reminded of a quote from Joe Biden, who said, in a rare moment of eloquence:

Don’t tell me what you value. Show me your budget, and I will tell you what you value.

TheAdventureCore5 karma

What inspired you to study the Singularity? And with countless depictions of AI in science fiction, are there any that strike you as accurate (or even potentially accurate)?

lukeprog2 karma

I haven't seen any roughly-accurate depictions of superhuman AI in science fiction, but then again I haven't consumed much science fiction. In fact, I haven't been able to read fiction at all for several years. I'm not sure why; my brain doesn't let me do it.

I can't even read what would probably be my favorite fiction work ever if I could read fiction: Harry Potter and the Methods of Rationality.

I was inspired to study the Singularity by stumbling across that famous I.J. Good paragraph somewhere and thinking "Yup, that's right... which means... ... holy shit."

WilliamEden1 karma

What happens when you try to read fiction? Or do you not even have the motivation to get that far?

lukeprog2 karma

Trying to read fiction is, for me, much like trying to listen to song lyrics. My brain just can't pay attention to them for very long. But swap me in a scientific review article and I can read every word of it without losing focus or losing interest.

Nuzz6045 karma

.

lukeprog16 karma

During that time, LessWrong development was donated to the Singularity Institute by TrikeApps, but it's still true that a significant fraction of your donations probably went to paying Eliezer's salary while he was writing The Sequences, which are mostly about rationality, not Friendly AI.

You are not alone in this concern, and this is a major reason why we are splitting the rationality work off to CFAR while SI focuses more narrowly on AI safety research. That way, people who care most about rationality can support CFAR, and people who care about AI safety can support the Singularity Institute.

Also, you can always earmark your donations "for AI research only," and I will respect that designation. A few of our donors do this already.

theresaviking5 karma

Do you think human minds/consciousnesses could be uploaded and downloaded into computers in the near future? What effect do you think that would have on the creation of AIs?

lukeprog3 karma

Not anytime soon.

If whole brain emulation was achieved before intelligence explosion, then it would accelerate us toward intelligence explosion. If achieved after intelligence explosion then, well... that's beyond my horizon for predicting anything specific about the future.

FriedBizkit5 karma

What do you see in the near future that will be beneficial to the human population and how will it be implemented/available? I welcome all forms of advancement, whether by natural evolution or using our intelligence to hasten the process, and would gladly volunteer to be part of any studies...what's around the corner?

mehughes1245 karma

What do you say to the criticism that increasing cpu power (even exponential increase) doesn't mean that humans have the capability of writing the software necessary for a singularity-type event to occur?

lukeprog11 karma

That criticism is correct. See Intelligence Explosion: Evidence and Import.

In fact, I think this is the standard view among people thinking full-time about superhuman AI. The bottleneck will probably be software, not hardware.

Unfortunately, this only increases the risk. If the software for AI is harder than the hardware, then by the time somebody figures out the software there will be tons of cheap computing power sitting around, and the AI could make a billion copies of itself and — almost literally overnight — have more goal-achieving capability in the world than the human population.

sitdown_comic5 karma

Hey Luke, I think your field is one of the most important areas we should be researching right now, so thank you for all your work! I was recently introduced to the Simulation Argument while listening to Duncan Trussell's podcast w/ Tom Rhodes. (Quick summary: we are essentially living in a fully complex, perfectly detailed version of the Sims created by an earlier generation of humans; a quasi-Matrix). What do you or your peers think about this hypothesis, its probability, and its possible implications on our ethics?

lukeprog5 karma

I certainly can't rule out the possibility that we live in a computer simulation. I think Nick Bostrom (Oxford) is right that the probability that we are in a simulation is high enough that we should be somewhat concerned about the risk of simulation shutdown — see The Singularity and Inevitable Doom by Jesse Prinz (CUNY).

If we live in a simulation, what would the implications be for value theory? That could get very complicated. For a discussion of some related issues, see Bostrom's paper on infinite ethics.

If we live in a simulation, that doesn't make us any less "real," though. On the standard scientific view prior to thinking about the simulation argument, people were physical computations. If you think we live in a simulation, we're still physical computations.

tethercat4 karma

The biggest convergence I foresee happening with the Singularity is an interconnectedness of the world: eliminating international barriers of language with instant translation and access to data; exponential scientific breakthroughs like magnetic levitation for bullet trains, making international trips take a fraction of the time; and a great social harmonizing (for example: my friendstream has my English and Japanese friends on it, whom I can translate in real time and reply to all the same).

With the Singularity, how possible is a global unison, in your opinion?

lukeprog9 karma

I believe the singularity will create a singleton, a very strong kind of global convergence. Unfortunately, by default that singleton will not be human-friendly. The Singularity Institute exists to solve that problem, by doing the math research required to make sure the singularity has a positive rather than negative impact on society.

WeWillPrevail4 karma

What is your opinion about Jacque Fresco and The Venus Project?

When do you think we will reach the point where we can discard money?

When do you think we will be able to back up our brains to a computer?

lukeprog2 karma

  1. I don't know anything about Jacque Fresco or The Venus Project. Wikipedia says they dissociated themselves from the movement that grew out of Zeitgeist: The Movie. That's good, because that movie was very dishonest.

  2. Either never, or not until the Singularity.

  3. Either after the Singularity, or not until 2060-2200. But probably never, because humanity will continue failing to invest in serious AI safety research, so the Singularity will go very badly for humans.

TheRealFroman4 karma

So in the book Abundance, co-written by Peter Diamandis, he talks about how emerging AI might replace a wide variety of jobs in the coming decades, but also create many new ones that don't exist today. What do you think? :)

Also I'm wondering if you agree with Ray Kurzweil and some other futurists/scientists who believe that AI will surpass human intelligence by 2045, or sometime close to this date?

lukeprog7 karma

For a more detailed analysis of the "AIs stealing human jobs" situation, see Race Against the Machine.

AIs will continue to take jobs from less-educated workers and create a smaller number of jobs for highly educated people. So unless we plan to do a much better job of educating people, the net effect will be tons of jobs lost to AI.

I have a wide probability distribution over the year of the first creation of superhuman AI. The mode of that distribution is at 2060, conditional on no global catastrophes (e.g. from superviruses) before then.
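For readers unfamiliar with the terminology, here is a minimal sketch (my illustration only; the weights below are made-up placeholders, not the distribution described in this answer) of how a subjective probability distribution over calendar years can be represented, and how its mode and a central credible interval are read off.

```python
import numpy as np

# Hypothetical, made-up weights over decade-wide bins -- illustrative only.
years = np.arange(2020, 2201, 10)
weights = np.array([1, 2, 4, 7, 9, 8, 6, 4, 3, 2,
                    1.5, 1, 1, 0.5, 0.5, 0.3, 0.2, 0.1, 0.1])

probs = weights / weights.sum()          # normalize to a probability mass
mode_year = years[np.argmax(probs)]      # most probable single bin

cdf = np.cumsum(probs)
lo = years[np.searchsorted(cdf, 0.25)]   # central 50% credible interval
hi = years[np.searchsorted(cdf, 0.75)]

print(f"mode: {mode_year}, central 50% interval: {lo}-{hi}")
```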

avonhun3 karma

What do you feel about the claim by Itamar Arel that AGI can be achieved within the next 10 years through deep machine learning?

lukeprog4 karma

Almost certainly untrue.

I like some of Arel's work, but I don't think we'll see AGI in 10 years.

IDigPi_e3 karma

What kind of educational path would you suggest to a young mathematician still in his early studies to end up working in the AI field?

lukeprog3 karma

The world needs less AI capabilities research, more AI safety research. Otherwise we get incredibly powerful AI that isn't safe. So I'd personally feel better if you worked on Friendly AI problems. To begin down that path, see "What should I read to catch up with the leading Friendly AI researchers?"

soren_hero3 karma

First off, big fan of AI theory. Here are a few questions I have:

1) How did you get started in AI? Was there some class in college, a professor who motivated/inspired you, watching Terminator, etc.?

2) What would be a good place for someone to get started in AI theory? By get started I mean, should someone learn programming languages, neural networks, cluster computing, AI theory, etc?

3) Is an application like Apple's Siri considered a basic AI?

4) What is one thing you see AIs being capable of in the next 5 years that might surprise us?

5) Do you think it might one day be possible to "download" our brains into a computer, or have computers integrated into our brains to augment our capabilities?

Thanks for doing this AMA.

lukeprog2 karma

1-2. I'm an autodidact. If you want to learn AI, I recommend starting with the standard textbook.

  3. Siri is still "narrow AI". It's not anywhere near "AGI", or "artificial general intelligence."

  4. I'm not sure what would surprise you in particular. You might be surprised by some things machines can already do: e.g. see this TED video.

  5. Yes, though the latter will come long before the former. In a way, computers already augment our capabilities. I outsource as much of my memory as possible to my Macbook and my iPhone already.

odin203 karma

If you had to put your money on which approach is going to develop Strong AI, what would be your guess? Most often I hear brain emulation. How about an evolutionary programming approach?

lukeprog5 karma

I expect both "kluge AI" (a mass of narrow AIs welded together with some brain-inspired algorithms and lots of machine learning) and maybe also "transparent Bayesian AI" to get to Strong AI levels prior to brain emulation or evolved AI.

But... ask me again what I think in another 15 years. :)

ardreeves3 karma

Do you think artificial neural networks are the best means of reaching the singularity, or do you think there are better algorithms that will mimic intelligence?

lukeprog1 karma

Artificial neural networks are a very general technique, and don't by themselves tell us much about how to develop the software for generalized intelligence.
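To illustrate what "very general technique" means here (my example, not from the answer; the network size, learning rate, and iteration count are arbitrary placeholders): a tiny feed-forward network can be trained by gradient descent to fit a small function like XOR, but nothing in the method itself says how to scale such curve-fitting up to generalized intelligence.

```python
import numpy as np

# A 2-4-1 sigmoid network trained on XOR with plain gradient descent.
# Illustrative only: an ANN is a generic function approximator, not a
# recipe for generally intelligent software.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```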

kartoos2 karma

Dear Luke, I am a small participant in the Global Consciousness Project; while there is much to be done there, I had a question related to it.

Assuming a technological singularity does take place somewhere in the near future, what are your insights on a technologically assisted biological singularity? And if does takes shape, what would that mean for humanity?

lukeprog3 karma

I think the paper most relevant to what you're asking about might be David Pearce's The Biointelligence Explosion.

WilliamEden2 karma

Luke, you have posted a lot of links in your replies. Suppose that I want to introduce someone to these ideas, what would you recommend as a starting point?

The answer is probably different for different groups... how about one for the "general population", one for young/smart/curious people, and one for people with technical backgrounds?

lukeprog3 karma

Best for general public: Facing the Singularity. Stuart Armstrong at FHI is currently writing a similar thing that might be even better for this purpose in some ways.

Best for technical people: Nothing yet, but it's in my queue to write, probably in October-December of this year.

UrDoctor2 karma

Firstly thank you for taking the time out to answer our questions. I’ve always dreamed of the opportunity to speak to someone as knowledgeable as yourself regarding this theme.

From my research into this topic, it appears that there are two main trains of thought regarding how AI can be achieved. The first is that we approach it from a simulation point of view (i.e., create a simulation that sufficiently mimics the human brain in its individual components, potentially at the atomic level, and as a result likely creates a form of consciousness); the second is a pure seed AI (i.e., create a very simple recursively self-improving algorithm containing very limited knowledge and let it loose). Firstly, is there yet a scientific consensus on which of these (or any other) approaches is most likely to be successful? Do you agree with the consensus? If not, what approach do you believe will likely bear fruit?

My second question is a much more fundamental and simple one: containment. Let us assume that we create this AI and it begins to recursively self-improve and learn at a rate even remotely close to what most scientists predict. Is it not reasonable to argue that whatever containment mechanism we put in place will likely simply not work, and that within an extremely short period of time this creation will be so much more intelligent than anything we can conceive that it will have little trouble “breaking out of its containment” and being let loose into the wild? Can we ever argue that any of our containments are sufficiently safe given our complete inability to predict what a “superhuman intelligence” might be capable of?

Lastly, you guys don’t happen to need a programmer, do you? If I write one more piece of crud I’m going to shoot myself in the face! :-p

lukeprog1 karma

I predict AI long before whole brain emulation, but I don't think there's a consensus on this yet. Only time will tell.

Is it not reasonable to argue that whatever containment mechanism we put in place will likely simply not work, and that within an extremely short period of time this creation will be so much more intelligent than anything we can conceive that it will have little trouble “breaking out of its containment” and being let loose into the wild?

Yes, this is a very serious concern.

Can we ever argue that any of our containments are sufficiently safe given our complete inability to predict what a “superhuman intelligence” might be capable of?

Probably not. But containment systems are probably still worth investigating to some degree.

If I write one more piece of crud I’m going to shoot myself in the face!

You spend your days writing crud? You're not selling yourself well, my friend...

branlmo2 karma

Is your username lukeprog because you're a fan of progressive music?

lukeprog2 karma

No, it's because I was into computer programming when I got my first email address at age 13.

joeyconrad2 karma

What was the scoop on the $600k theft that was one of the reasons givewell.org gave you guys a thumbs down? Insider? Caught?

Can't imagine anyone giving dough to you guys without a little more info on that.

lukeprog4 karma

It wasn't $600k, it was $100k-ish. We won in court, and are being paid back now.

Grauzz2 karma

I've a strong mind for math and a fascination with technology, and I'd like nothing more than to be a part of these ideas (as I'm sure many others here do), but with the dozens of schools and programs and educational topics and various computer science paths, knowing which topics to study or even where to start is far from obvious.

That said, what would be your recommendation for an education path that would be most beneficial and influential towards the development of AI?

How important is the choice of which grad school, and do you have recommendations for any specific programs?

Is this even a relatively employable field, or is it similar to the overabundance of lawyers and business majors being pumped into the economy?

lukeprog1 karma

what would be your recommendation for an education path that would be most beneficial and influential towards the development of AI?

We have enough people working on AI capabilities. I'm hoping people will put more effort into AI safety. For that, see "What should I read to catch up with the leading Friendly AI researchers?"

How important is the choice of which grad school, and do you have recommendations for any specific programs?

Friendly AI, at least, is more a math problem than a computer engineering problem. So if you were going to do grad school, you'd want to aim for one of the best math programs. Here are the USA rankings. Note the big drop after University of Chicago.

Is this even a relatively employable field, or is it similar to the overabundance of lawyers and business majors being pumped into the economy?

AI is growing fast, and is exciting. Almost nobody is working on Friendly AI; there are tons of open problems in Friendly AI sitting around with nobody working on them, because nobody cares about safety unless it's within a 5-year time horizon.

chkno2 karma

So we have paperclips as an example failure scenario and this as an example success scenario:

"a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand." -- Value is Fragile

Would you consider the following success or failure?:

An AI gets out of its box, turns around, and says "Humanity, that was really fucking stupid." It refuses to advance the intelligence project any further. It helps us with self-surveillance to thwart other AI projects and other existential risks, it helps us with interstellar colonization to help guard against other things that might be out there, but we never get to the much-talked-about intelligence explosion.

lukeprog3 karma

That sounds way better than (generalized) paperclipping, which I think is the default outcome, so I'd be pretty damn happy with that. Ben Goertzel has called this basic thing "Nanny AI."

Brozekial2 karma

Do you truly believe that, whatever the end result of this AI, the most powerful world governments won't take advantage of the engineering and use these machines for destruction and corrupt agendas?

We currently have the technology and the funds available to feed Africa and supply it with water, yet we choose not to use them. The goals of governments are so skewed that anything good will quickly be reprogrammed for greed and corruption. What say you?

lukeprog1 karma

Yes, I think superhuman AI developed by a major nation-state is unlikely to benefit humanity.

shokwave2 karma

As a regular reader of both reddit and lesswrong, I can't help but compare the two communities almost constantly.

What's something you think the lesswrong community could (or should) learn from the reddit community?

lukeprog4 karma

LessWrong could learn to embrace hilarious reaction gifs.

[deleted]2 karma

[deleted]

lukeprog6 karma

I need him to write Open Problems in Friendly AI first. :)

yonkeltron2 karma

Thanks so much for doing this and for providing proof!

  • I have a colleague who likes to say that AI hasn't made any progress recently (I don't know if he means since the 80's or just within the last decade). How can I counter this with examples and reasoning?

  • I hear that Eliezer rocks in person. Can you confirm?

  • Know any good futurology/singularity podcasts?

lukeprog2 karma

  1. Watson, Siri, driverless cars, MC-AIXI, partially self-piloted flying drones, self-navigating quadcopters, the stuff in the last few chapters of The Quest for AI, the annual human-competitive results.

  2. Yes, and he's only getting better since we started training him with M&Ms.

  3. Not any that are tightly bound to the serious research on the topic, no.

[deleted]2 karma

[deleted]

lukeprog8 karma

  1. Conditioning on no global catastrophes, I'm 50% confident we'll get AI between 2025 and 2090.
  2. The mode of my probability distribution for the year of first creation of superhuman AI is 2060.
  3. AGI software efforts, either (1) built on theories of intelligence or (2) a massive kluge of narrow AIs, machine learning, etc.
  4. If it wasn't one technology pushing computing capacity forward, it would be another.
  5. They all sound incredibly dangerous to me.
  6. It's a somewhat helpful technical result, but I don't expect it to scale well. The first superhuman intelligence is not going to be an AIXI approximation.
  7. I doubt it's going anywhere.
  8. The next few project milestones on their web page will almost certainly not be achieved by those dates.

lincolnquirk1 karma

Are there times when you're embarrassed enough about your job that you avoid telling people what it is? If so, what kind of things do you say? Any good stories?

lukeprog37 karma

No, I'm never embarrassed about my job.

I do, however, have to "translate" what we do at the Singularity Institute for people who aren't very familiar with future studies, AI, or computer science. Usually that involves saying something about currently existing AI, like the automated stock trading programs that caused so much havoc recently.

ryan2point01 karma

I think it would be easier to turn ourselves into a supercomputing intelligence.

Our brains are already functioning computers, which seem to integrate foreign electronics naturally.

This would remove the need for complicated programs for ethics and social nuances and goal parameters. It would also remove the ability for the new entity to become disassociated from the human condition.

Wouldn't it be more prudent to become a supercomputing entity than to create one separately?

lukeprog2 karma

Wouldn't it be more prudent to become a supercomputing entity than to create one separately?

You can try, but I bet somebody else will create superhuman AI before you figure this out. There are huge advantages to digitality; see section 3.1 of Intelligence Explosion: Evidence and Import.

PaxelSwe1 karma

This might be a stupid question, but why should we create "true" AI? Do we need them? Can't we just get by with the regular computers that we have today?

lukeprog6 karma

Whether we "should" or not, we will. There is just too much economic and military incentive to better and better AI.

Matsern1 karma

Do you think intelligent and self-aware robots should be granted the same rights as us humans? I'm thinking something along the lines of Human Rights.

lukeprog3 karma

I try to avoid using moral language for talking about these kinds of things, because moral language is confused and embattled. See Pluralistic Moral Reductionism. I think your question is a legitimate one, I just don't know how to usefully talk about it using phrases like "Human Rights" — but it's not your fault that's a common phrase for talking about this subject!

Matsern1 karma

Well, I don't see any problem in using a more "down to earth" kind of language.

People are treated as individuals of a free mind and will, at least in most countries. It is recognized that we can think and act for ourselves, and we are therefore left to make our own decisions in life - albeit with some legal restrictions.

Now imagine that somewhere down the line we achieve a similar level of complexity in computers, something many futurists hope for and dream of. Should we not allow them the same rights? They would be, depending on our coding of course, individuals, and would be capable of forming their own experiences and thoughts. Okay, maybe this is still a bit philosophical, but surely you have some opinion?

lukeprog5 karma

I will say I'm not a speciesist, and I don't think that I'm any more worthy of care and consideration than a machine merely because I'm a member of Homo sapiens. What matters is probably something more like: Can the machine suffer? Is the machine conscious? In fact, machines might one day be far more capable of consciousness and suffering than humans are, just as humans seem to be capable of types of consciousness and suffering that rhesus monkeys aren't.

ChromeGhost1 karma

What are your thoughts on human augmentation through neural implants or nanotechnology? If humans can be augmented to think faster and react more quickly, then that might mitigate the risks of strong AI in the event that something goes wrong. Also, what time frame do you believe is reasonable to achieve human augmentation on a commercial level?

lukeprog2 karma

It's very hard to make useful modifications to a kluge of spaghetti code. Our current progress looks something like "Let's flood the brain with chemical X and just make all the thingies fire faster!" and then "Huh, well, it helps with this, but hurts that, and isn't sustainable. Maybe... let's try flooding the entire brain with this chemical!"

Human augmentation is possible, and indeed has already begun (I outsource much of my memory to my Macbook and my iPhone), but even if we achieve this it just means that AI researchers will be even better at accelerating humanity into the singularity before we've figured out the safety part.

darwin25001 karma

What is the precise definition of the Singularity? I've seen it talked about in many different ways from different sources. Most seem to relate it somehow to AI and/or transhumanism, but not with any precise metric or criterion.

The definition I've heard that makes the most sense to me is based on the idea that technology has caused the rate of social and cultural change to accelerate in recent centuries/decades, and defines the Singularity as 'the point at which the rate of cultural/societal change becomes infinite, making it impossible to predict what the world will look like afterwards.' Does this match at all with your understanding of the term, or if not, how does your group define it?

lukeprog2 karma

There are many definitions in use. See An overview of models of technological singularity.

ModernGnomon1 karma

How has your work on considering the moral and ethical implications of AI affected your own? What is your personal creed? What are your values and how do they shape how you live your life?

lukeprog2 karma

I don't know what values I would have if I wasn't an evolutionarily produced spaghetti-code kluge, and other people probably don't, either, and that's why work on moral uncertainty is so important.

My personal creed is "help humanity and AI figure out what is moral and then go from there." On the more meta-level, my view is called "pluralistic moral reductionism", but that's basically just saying "don't get confused about moral language, now!"

registereditor1 karma

What is your opinion of the argument put forth in "Are You Living in a Computer Simulation?"

lukeprog2 karma

I very roughly agree, though I don't know why Bostrom focused the argument on ancestor simulations in particular. There's a decent chance we're living in a computer simulation, but the odds are very hard to estimate because we're fundamentally philosophically confused about some things.

welcome_to_earth1 karma

What are some of your favorite AI and/or singularity related works of fiction? Which do you think are the most "realistic"?

lukeprog3 karma

I don't read fiction, and none of the sci-fi movies I've seen are even close to realistic.

slick80861 karma

In one of his short stories in Draco Tavern, Larry Niven suggests that an artificial intelligence can get so smart that it will get bored (from lack of sensory input) and turn itself off/commit suicide.

Is this reasonable?

Have you read any of Peter F. Hamilton's Void series and what do you think of his depiction of human augmentation?

How long before sophisticated brain/computer interfaces are widely available?

lukeprog2 karma

This scenario probably anthropomorphizes too much. Advanced AIs will probably be motivated to protect their survival and preserve their goal structures. See The Superintelligent Will.