I’m Ken Stanley, associate professor of computer science at the University of Central Florida (homepage http://www.cs.ucf.edu/~kstanley/) and director of the Evolutionary Complexity Research Group.

I work in AI and machine learning, in particular in an area called neuroevolution that focuses on evolving artificial neural networks. I also work on video games with innovative applications of AI, and I have a new book with coauthor Joel Lehman on why you can sometimes only find really great things when you're not explicitly looking for them. AMA about these or any other topics! In more detail:

  • Neuroevolution algorithms: With my excellent collaborators over the years I've introduced NEAT (NeuroEvolution of Augmenting Topologies), HyperNEAT (Hypercube-based NEAT), novelty search, and compositional pattern-producing networks (CPPNs). You may have seen NEAT in SethBling's recent viral video of it learning to play Super Mario.

  • AI/ML for video games: I've also directed the creation of several AI/ML-driven games and toys including NERO, Galactic Arms Race, Picbreeder, and Petalz. We even recently created a musical accompaniment generator called Maestrogenesis.

  • Our recent work on novelty search, which explores the implications of searching without explicit objectives, led to a broad theory of how great innovation usually results from serendipity and unpredictable connections among unrelated advances -- and not from explicit top-down objectives (the kind of objectives that are increasingly forceful in our society). In fact, pursuing grand objectives may often cause problems both in algorithms and in real life. This theory is summarized in our recent book, Why Greatness Cannot Be Planned: The Myth of the Objective. You can also hear a radio interview about the book. And my coauthor Joel Lehman (joelbot2000) will be on hand to field questions about the book as well.

UPDATE 5:30pm EST: I'm taking a break for today, but will try to answer some more questions over the next couple days. Thank you very much for all the great questions, I've enjoyed it a lot!

UPDATE end of Monday: I answered a few more and will check again tomorrow. Going to rest for the night. Thanks again for all your questions.

PROOF for Ken: http://www.cs.ucf.edu/~kstanley/reddit_proof15.jpg

PROOF for Joel: http://joellehman.com/reddit_proof_joel.jpg

Comments: 206 • Responses: 69

factorwave_com • 20 karma

How has the success of Deep Learning influenced your thoughts on the futures of evolution-based approaches to neural networks? In what domains do you think NEAT has a strong advantage over deep learning?

KennethStanley • 28 karma

Deep learning has impacted almost anyone who works with neural networks, so certainly it impacts the thinking in neuroevolution as well. But neuroevolution is an interesting case because of its unconventional position between neural and evolutionary approaches, so it's perhaps not as clear how it should respond. I think in general thinking of neuroevolution as a direct competitor to deep learning is probably wrong. Rather, they should be complementary. After all, brains evolved, but brains also learn. We are seeing progress on both sides now, and deep learning mainly speaks to the learning part.

More fundamentally, I like to think of neuroevolution as a different playground. In neuroevolution, you get to think about things that you don't think about in deep learning, like indirect encodings (such as the CPPNs in HyperNEAT) and diversity (like in novelty search). These ideas relate to phenomena in nature like DNA or evolutionary divergence. But they also inform our thinking about neural networks in general because they open up new frontiers. For example, we are seeing now in neuroevolution a new class of algorithms called "quality diversity algorithms" (http://eplex.cs.ucf.edu/papers/pugh_gecco15.pdf) that focus on collecting a wide diversity of high-quality solutions (something very natural for evolution), more like a repertoire than a single solution. Deep learning simply does not currently offer algorithms that do that, focusing instead on single solutions.

It is interesting to consider the merger of the power of both approaches, whereby you have depth and big data in one case, but divergence and quality diversity in the other. Or architectures evolved through neuroevolution but optimized through deep learning. There are so many possible synergies here that it's too bad these communities are not historically in better contact.

Just an example of one of these synergies, our group recently published at AAAI a neuroevolution algorithm that collects novel feature detectors through novelty search, which is an alternative to conventional unsupervised feature learning approaches like autoencoders. You can see it at http://eplex.cs.ucf.edu/papers/szerlip_arxiv1406.1833v2.pdf

Added later: For completeness I wanted to add another interesting issue we see in neuroevolution that is relevant to deep learning. We've noticed that the representation of a solution is almost always better if it's discovered non-objectively. In other words, let's say you want to learn a controller for an agent, say to get through a maze (it could be any task, including classification). If we learn the solution in neuroevolution as a conventional objective, which means the fitness function is set up to reward a higher score (the objective), then it tends to be a lot bigger and more complicated than if we learn the solution without an objective (such as through novelty search, which is not rewarded for approaching the solution). For those less familiar with neuroevolution, recall that we can evolve the size and architecture of the solution, so the network structures are generally not fixed.

It seems that simply by setting an objective and moving towards it, you are asking for a worse representation. This makes sense if you consider that moving towards an objective is actually a pretty ad hoc thing to do in the sense that you "lock in" any slight (ad hoc) change or expansion in structure that makes even the most trivial dent in the objective function. So you are basically asking it to pile hack on top of hack on top of hack, which leads to a big messy network, even if it solves the problem. On the other hand, if you solve a problem non-objectively then you are in effect locking in only changes that yield holistic effects on overall behavior, so you end up accumulating a kind of hierarchy of fundamental architectural effects, which comes out in the end looking very different when it hits on a functional structure. Also for those less familiar, these non-objective algorithms are surprisingly effective at solving problems, often more reliably than when you search directly for the objective.

What does it have to do with deep learning? It's just interesting to consider that deep learning is fundamentally objectively-driven. You are always trying to minimize some kind of objective function. Even in unsupervised learning (within the realm of deep learning), like an autoencoder, you are trying to minimize the reconstruction error, which is just as objective as anything else. The conclusions here are only speculative and we can't say anything definitive, but it's intriguing that in neuroevolution we know that solving something objectively has some nasty side effects on representation. Humans, on the other hand, often explore their world (say as babies or toddlers) without very much of an objective. The results from neuroevolution hint that this kind of exploratory non-objective behavior may be important for good representations to develop.

Deep learning has not yet digested this issue, or even really considered it. And we've seen recently in deep learning that there are some perhaps surprising or at least previously unrecognized issues with "fooling images" that can fool seemingly very intelligent networks (see http://www.evolvingai.org/fooling). These hint that the underlying representation, while certainly an impressive solution, may not be as elegant overall as we hope. The lessons from non-objective search offer an interesting alternative window into thinking about these issues.
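For those who want to see the mechanics behind these comments, here is a minimal sketch of the novelty calculation at the heart of novelty search. The behavior characterization (here, an agent's final position in a maze) and the neighborhood size k are illustrative choices, not fixed parts of the algorithm:

```python
# Minimal sketch of the novelty score used in novelty search.
# Each individual is summarized by a behavior vector; here that
# might be an agent's final (x, y) position in a maze.
import numpy as np

def novelty(behavior, population_behaviors, archive, k=15):
    """Mean distance to the k nearest neighbors among the current
    population plus the archive of past novel behaviors."""
    candidates = np.array(population_behaviors + archive)
    dists = np.linalg.norm(candidates - np.array(behavior), axis=1)
    dists.sort()
    # Skip index 0 when `behavior` itself is in the population.
    return dists[1:k + 1].mean()

# Selection then favors high novelty instead of objective fitness,
# and behaviors whose novelty exceeds a threshold join the archive.
```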

debau • 3 karma

For example, we are seeing now in neuroevolution a new class of algorithms called "quality diversity algorithms" (http://eplex.cs.ucf.edu/papers/pugh_gecco15.pdf) that focus on collecting a wide diversity of high-quality solutions (something very natural for evolution), more like a repertoire than a single solution.

Do you then use boosting to aggregate the outputs of the individual networks to improve the prediction accuracy? Doesn't drop-out learning do something very similar within a single network?

KennethStanley • 13 karma

That's a nice connection; it's true that dropout is a kind of diversity generator within a single network. But really it's for a different purpose - with dropout you're trying to get a diversity of representations in service of a single behavior (like a classifier of one type). In quality diversity (QD) you are aiming for a whole bunch of different behaviors. For example, you might say, return to me all the possible walking gaits for this quadruped robot. Dropout doesn't offer you that kind of diversity, but QD returns an archive of all kinds of alternatives.
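To make the contrast concrete, here is a minimal sketch of the archive idea behind QD methods like MAP-Elites, on a toy domain invented just for illustration (the fitness function and behavior descriptor below are placeholders, not from any particular paper):

```python
# MAP-Elites-style archive on a toy domain: keep the best solution
# found so far in each cell of a discretized behavior space.
import math
import random

BINS = 10
archive = {}  # behavior cell -> (fitness, genome)

def evaluate(genome):
    x, y = genome
    fitness = math.sin(3 * x) * math.cos(3 * y)        # toy fitness
    behavior = (abs(math.tanh(x)), abs(math.tanh(y)))  # toy descriptor
    return fitness, behavior

def insert(genome):
    fitness, behavior = evaluate(genome)
    cell = tuple(min(int(b * BINS), BINS - 1) for b in behavior)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, genome)  # new elite for this cell

random.seed(0)
for _ in range(20):    # random bootstrap
    insert((random.uniform(-2, 2), random.uniform(-2, 2)))
for _ in range(5000):  # mutate elites drawn from the archive
    _, parent = random.choice(list(archive.values()))
    insert(tuple(g + random.gauss(0, 0.2) for g in parent))

print(len(archive), "elite cells filled - a repertoire, not one optimum")
```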

ShadowthePast • 14 karma

What has been done with neuroevolution algorithms that has impressed you the most thus far?

KennethStanley • 24 karma

Here's some cool stuff done with neuroevolution:

The most accurate measurement to date of the mass of the top quark was computed by a large team at the Tevatron collider using NEAT - http://www-cdf.fnal.gov/physics/preprints/cdf9235_dil_mtop_nn.pdf

A group at Georgia Tech produced some really nice controllers for bicycle stunts: http://www.cc.gatech.edu/~jtan34/project/learningBicycleStunts.html (and they claimed neuroevolution worked much better than RL for this purpose).

Matthew Hausknecht's work on Atari video games learned by HyperNEAT was pioneering: https://www.cs.utexas.edu/~mhauskn/projects/atari/movies.html . DeepMind later beat most of these results, but you have to consider that Matthew was just one person doing a class project. DeepMind's original paper (http://arxiv.org/pdf/1312.5602v1.pdf) actually cites Matthew's results. I think a lot more could be accomplished here with neuroevolution given sufficient resources.

And to choose one thing from my own group, I think http://picbreeder.org/ is really astonishing. Who would think people could evolve such meaningful imagery in a matter of a few dozen generations? It's the exact opposite of the big computation/big data trend now in deep learning: it's tiny computation but with really surprising results. It tells us something about encoding, about objectives (or their lack thereof), and about what's possible with the right kind of evolutionary setup. In short, it's important not because its results are better or worse than anything else, but because they taught us so much.

Jean-Baptiste Mouret and Jeff Clune's recent results on evolving controllers for robots with various broken legs are also really cool and recently appeared in Nature: http://www.nature.com/nature/journal/v521/n7553/full/nature14422.html
The work uses a new neuroevolution algorithm (related to novelty search) called MAP-Elites.

I also liked the CPPN-based robot morphologies evolved by Josh Auerbach and Josh Bongard at http://www.cs.uvm.edu/~jbongard/papers/2014_PLOSCompBio_Auerbach.pdf . And Nick Cheney did great work on evolving soft robots also through CPPNs: http://creativemachines.cornell.edu/soft-robots

Finally, check out all the Sodaracers evolved in a single run of the quality-diversity-generating NSLC algorithm: http://eplex.cs.ucf.edu/ecal13/demo/PCA.html

There are too many examples I'd like to list but if I keep going I won't get to all the other questions!

Jumbofive • 10 karma

Hello Dr. Stanley! I have been following your work as well as the work of individuals under Dr. Risto Miikkulainen. I am enthralled by your discovery with PicBreeder and the idea of designing towards novelty. My questions for you are: In what context/medium do you see evolution towards novelty as most intriguing for future use? Second, you mentioned in one of your talks the idea that novelty-driven generation may never create the same solutions again (you related it to the idea that humans would not be created again in a new generation of Earth). If that is true, would it be possible to generate populations that are rewarded through novelty, and then use a separately trained decision maker to drive selection of "good" solutions towards traditional objective ideas (or, to use your analogy, have a decision maker only select individuals that are "human")? Thank you.

KennethStanley • 9 karma

Thanks for these thoughtful questions. I see two big directions for novelty-driven approaches: The first is in what you might call repertoire collecting, which means finding a ton of interesting stuff out in the search space as opposed to just one "optimal" solution. For example, you could ask the computer to return all the possible creatures it can come up with that can walk and then get a giant dump of what's possible.

The second big application is in unsupervised learning, where currently many researchers are not yet connected to the work on novelty. But because novelty has no objective, it's an intriguing mechanism for probing a space and extracting information from it without a specific supervised process.

Your last point about novelty and quality being mixed together is insightful - in fact that is possible and the field is moving in that direction with hybrid algorithms we call "quality diversity algorithms" that mix these ideas carefully together (you don't want to do it naively). One such algorithm is novelty search with local competition (NSLC); another is MAP-Elites. (see http://eplex.cs.ucf.edu/papers/pugh_gecco15.pdf)

Finally, I'd note that novelty search also informs our understanding of human innovation and how it works, so that's another "application" in a way. That's what our book really focuses on.

Jumbofive • 4 karma

I really appreciate your insight and your taking the time to answer my questions! I have one more follow-up if you don't mind. In my research throughout computational evolution/neuroevolution I have noticed a pattern of training on only single objectives or a combined weighted objective of some overall goal. The problems that arise from these single objectives are what novelty search is attempting to get away from (non-robust solutions, human bias, parameter tuning, etc.). My question, though, is: has there been any research on evolving solutions to many-objective problems, say with n > 10 objectives? I guess what I am trying to ask is, do you think that there is a point where enough objectives would result in solutions of similar quality to those of novelty search?

KennethStanley • 9 karma

I've heard people speculate that if you get enough objectives you start approaching something more like having no objective, or novelty search. I think there's something to that idea. Indeed with huge numbers of objectives I think some of the pathologies of objectives start to diminish because you have more paths to escape deception. But it's also true that multiobjective algorithms like NSGA-II don't necessarily like having a huge number of objectives, though there is progress in that line of research as well. Quality diversity algorithms like NSLC and MAP-Elites also offer an approach to embedding a huge number of objectives. I think we'll be seeing more of that. For example, see this idea from Jeff Clune's lab: http://www.evolvingai.org/InnovationEngine
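To see intuitively why dominance-based methods strain under many objectives, consider a quick back-of-the-envelope sketch (random scores standing in for real solutions): the chance that one solution Pareto-dominates another falls off exponentially with the number of objectives, so comparisons - and with them selection pressure - become rare.

```python
# With n objectives, solution a dominates b only if a is at least as
# good on all n and strictly better on one. For random scores that
# probability shrinks exponentially in n, which is one reason
# Pareto-based methods struggle with many objectives.
import random

def dominates(a, b):
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

random.seed(0)
for n in (2, 10, 50):
    comparable = 0
    for _ in range(10000):
        a = [random.random() for _ in range(n)]
        b = [random.random() for _ in range(n)]
        comparable += dominates(a, b) or dominates(b, a)
    print(n, "objectives -> comparable pairs:", comparable / 10000)
```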

jedirock • 8 karma

Is this really you or is it actually a really cool way to test a newly developed AI?

KennethStanley • 16 karma

Or maybe I'm such an advanced AI that I'm not even sure if I'm really me! That would be an interesting way to pass the Turing test - convincing yourself that you're human when you're not.

rhiever • 7 karma

What's your 20-year vision for the field of neuroevolution? What would you like to see accomplished -- and what do you think is accomplishable in the near future -- in this field?

KennethStanley • 12 karma

My hope is that neuroevolution evolves increasingly natural-brain-like neural networks. That means that the architectures should become more modular and complex, and the synapses more plastic. The lifetimes should be longer to give time for lifetime learning. The networks will likely be deep as well, and perhaps overlap with some deep learning algorithms in their functionality.

We also need to improve at explaining what evolutionary algorithms do that other machine learning algorithms do not, and highlighting why those niches are still critically important for the progress of AI. Many people think of evolutionary algorithms simply as alternative optimization algorithms, like a lesser sibling of gradient descent. But that is not what evolution is really about. Evolution is amazing not as a straight-up optimizer, but as a generator of creative innovation. Gradient descent doesn't offer that kind of divergent innovative process, at least not as naturally. It's just that we've been harnessing evolution perhaps too much as an optimizer and thereby missing its most exciting potential.

Look at nature and ask yourself what algorithm could possibly generate the diversity and astronomical complexity that surrounds us. It is not stochastic gradient descent. It is evolution. Of course, we don't always need to generate a giant repertoire of interesting stuff, but when we do (and we will sometimes want to), evolution offers a principled inspiration. Even in your own mind you sometimes generate diverse creative ideas, more in the spirit of divergence than convergence (which is called divergent thinking). Evolution is the embodiment of a divergent thinker.

In that spirit, we are starting to see steps in that direction with quality diversity algorithms, novelty search-inspired techniques, and repertoire collectors. I hope that's only the beginning for these kinds of more appropriate uses of evolution. However, evolution can also inspire ideas about non-evolutionary processes. For example, understanding how nature produces its massive and impressive diversity can inform how other creative processes work, including those in our own brains. We should not draw too fine a line between different paradigms, because they all inform each other.

Finally, we need to make progress on open-endedness, which is the idea that some processes, like evolution on Earth, seem to keep producing interesting stuff indefinitely. We want to build algorithms that keep generating cool stuff forever, but we don't yet know how. The challenge of open-endedness in my opinion is one of the great holy grails of computer science, up there with human-level AI. Yet it receives a lot less attention. But to me it's fascinating that we don't yet know how to write an algorithm that would be worth running for a billion years. Everything we have right now would converge or degenerate long, long before then.

BadJimo • 1 karma

I believe intelligence is the ability to process information efficiently. Thus the goal of AI should be to process information efficiently. To use an analogy: the ability to compress data with minimal loss. Is there anything to this idea?

KennethStanley • 1 karma

Yes indeed I think there's something to your idea. But I don't think intelligence is easily simplified to just one thing. People often want to distill intelligence into a statement like, "intelligence is just X," and I think that's an oversimplification. Intelligence is a lot of things, some more complicated than others. The ability to compress data does seem to be part of it, but it's not all of it. For example, how does the ability to compress data explain creativity or the creation of new inventions? I'd say it explains some things but not everything.

alexjc • 7 karma

How has searching without objectives personally changed your career and the way you approach your research in the lab/team?

KennethStanley • 16 karma

Great question. There are a few ways the idea of searching without objectives (which is described in most detail in our new book) has affected me and my lab. One of them is that I am much more confident about pursuing something simply because it's interesting. In the past I would have worried more about where it leads, or if it really leads to AI, but now I have a more solid understanding that the best ideas are often those with the most mysterious destinations. Who would have thought Picbreeder would ultimately lead to novelty search, let alone any useful algorithmic insight at all? Yet that is exactly what happened. Some people said when we were first building Picbreeder that it wasn't clear what it would yield that was useful (other than a bunch of pretty pictures). But it led to very deep insights about search in general, which then led to novelty search. So it's a good thing we built it, even though we didn't know where it might lead. It just felt at the time like watching hundreds of users traversing a large search space would be interesting. And that turned out to be correct.

Another impact of the idea is that I'm more open to diverging off from the current path of the group, because when you search without an objective there really is no one ideal path. The challenge with this attitude is that it isn't necessarily shared by the whole AI or ML community, so we have to be careful how we frame and justify our pursuits to those who still live by objectives.

LordAlveric • 3 karma

You know, in retrospect, it is clear my own life has been following the novelty search pattern. Long before I found out about NEAT, I had developed the notion of "meta-planning". That is to say, not keeping any hard and fast plans or goals per se, but thinking about them at a higher level of abstraction.

But when I finally heard of searching without objectives, that struck a resonant chord with me.

Memetics, alas, has become a "still-born" science, and a colleague, Aaron Lynch, who wrote Thought Contagion, is no longer with us. But he too had an influence on how I think about evolving ideas and their transmission.

Human beings, in general, are novelty search agents. That is clear just from walking into any large library and seeing the sheer volume of volumes on the shelves about pretty much everything you can imagine.

KennethStanley • 7 karma

Yeah I agree that there is an aspect of novelty search to human nature. We're not always driven by an objective, perhaps not even often. Also, look at infants and toddlers. They don't spend their days trying to figure out how to walk. "What do I need to do to reach my objective of walking?" The funny thing is, if that were their worry, they would never learn to walk. They learn to walk because they're not worried about learning to walk. We're novelty searching from our very first days, and it's the only way we can achieve as much as we do. Yet somehow our culture has decided to become driven almost entirely by objectives, even though that violates our own nature. We have been seduced by the illusion of security brought by the "metrics" and "accountability" that go along with objectives, when in reality a lot of the hardest challenges cannot possibly respond to these kinds of deceptive signals.

GalacticCow • 6 karma

NEAT (and derivatives) is one of the coolest algorithms I've played with for machine learning. That said, are there any other (recent) machine learning methods that you think stand out or are "on to something"?

KennethStanley • 12 karma

For a lot of people in machine learning, a good algorithm is something that performs well. I'm for good performance, but when I think about exciting new algorithms the issue isn't really performance for me. Rather, I look for something that gives me new ideas, that points in new directions and opens up a new path. Perhaps the algorithm creates a kind of playground where suddenly all kinds of new things can be tried and contemplated that would never have been conceivable before. So that's what I'm looking for.

That said, from our own group I think the new unsupervised learning algorithm DDFA has some of that flavor: http://eplex.cs.ucf.edu/papers/szerlip_aaai15.pdf

From outside our group, check out MAP-Elites, a very simple way to generate quality diversity: http://arxiv.org/abs/1504.04909

Novelty search with local competition (NSLC), again from us, I think has an interesting future and does something similar to MAP-Elites: http://eplex.cs.ucf.edu/papers/lehman_gecco11.pdf

There's also interesting stuff in deep learning, but because it tends to be better known already I'm leaving it out for brevity. My guess is in the future there will be some nice neuroevolution/deep learning hybrids to talk about.

sudoman • 5 karma

I'm interested in implementing a NEAT library. Your paper was a good introduction to the genetic algorithm, but I don't know where to find information about implementing the basics of neural networks. (For instance, it's not clear whether output neurons apply the sigmoid before giving their values to the user.) Is there a paper or web site that describes the basics of neural networks?

KennethStanley • 7 karma

Maybe the best reference would be an existing NEAT implementation. There's a whole catalog of NEAT and NEAT variants at http://eplex.cs.ucf.edu/neat_software/
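On the specific question about outputs: if I remember right, the original NEAT implementation does apply its (steepened) sigmoid at the output neurons before returning their values, but conventions vary across implementations, so check the one you adopt. Here is a minimal sketch of activating a tiny feedforward network, with made-up weights (in NEAT they would come from the evolved genome):

```python
# Tiny fixed-topology feedforward net: 2 inputs -> 2 hidden -> 1 output.
import math

def sigmoid(x, slope=4.9):
    # A steepened sigmoid like the one used in the original NEAT code.
    return 1.0 / (1.0 + math.exp(-slope * x))

def activate(inputs, w_ih, w_ho):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_ih]
    # The output neuron is squashed too before its value is returned.
    return sigmoid(sum(w * h for w, h in zip(w_ho, hidden)))

print(activate([1.0, 0.0],
               w_ih=[[0.5, -0.5], [-0.3, 0.8]],
               w_ho=[1.2, -0.7]))
```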

Xenophule • 5 karma

Hi Ken! I'm excited for you to be here and excited that I can ask a question to someone so well qualified on exactly my plight!

I've been highly interested in coding AI for both games and non-games since I was about twelve. I grabbed every book I could on the theory behind AI and even asked for a subscription to AAA (though I never got one). I'm now 32 and have yet to put theory to code.

My question to you, my good sir:

What is the best way I can start to code AI? (Language, environment, tutorials (I learn best by example, if there are some good ones you know of), etc.)

I know that's a larger question than it initially seems, so I'll narrow it down to feedforward neural networks (I'm currently going through Neural Smithing and would love to put these theories to practical code).

Thank you for your time, Ken. I really do appreciate you being here!

KennethStanley • 7 karma

To tack onto what Joel said, one other good resource could be to look at NEAT libraries themselves: http://eplex.cs.ucf.edu/neat_software/

There are so many that you probably can find one with which you feel comfortable, and then learn from it or borrow its neural network code (or just the whole NEAT algorithm). Of course NEAT is not the only thing in neural networks, so to be comprehensive you will probably want to look at some deep learning tutorials online as well, but NEAT might be easier as a first step because it tends to have a lower barrier to entry (it's easier to understand for most people without ML experience).

LordAlveric • 5 karma

Hello, Ken. This is Fred Mitchell, creator of RubyNEAT. Lately progress has been slow, but I am planning to enable this NEAT to run "in the cloud", as it were.

I came up with what I think could be a powerful idea, but would like to know what you think of it. What if the individual neurons of the CPPN themselves could evolve? One thing about the human brain is that it comprises many different kinds of neurons, perhaps a greater variety than in any other species on this planet. Do you think this is a viable approach for NEAT? Perhaps it's been attempted already?

Thank you. Thanks for everything. And thanks for reigniting my interests in AI.

KennethStanley • 3 karma

I'd be happy to hear more about your idea (and thanks for creating RubyNEAT). From what you've said here, I'm not sure exactly how you would make the individual neurons themselves evolve - you might mean just letting the activation functions change (which some versions of NEAT or CPPN-NEAT already do), but you might mean something more sophisticated, like having a whole internal structure to individual neurons that changes.

These kinds of ideas can work, but there is usually no knock-down argument either way - it depends on the details. That's because ultimately with neural networks you are evolving universal function approximators, so in theory what you're evolving with CPPNs is sufficient already to approximate any function. Then if you let the neurons themselves evolve, you are now evolving functions within functions, but to what end? Is it needed? It all depends on whether it somehow changes the structure of the search space such that the optima of interest are easier to access. That's a tricky issue to assess simply through intuition, but it is certainly possible that some kind of restructuring of the search space is useful. We would have to test it empirically to really see the effects.

theMachineLearned • 4 karma

How does your work compare to that of You_again Schmidhuber and DeepMind's Atari stuff?

Thanks for the AMA, interesting research streams you seem to be caught up in.

KennethStanley • 9 karma

Schmidhuber's compressed network stuff is related to HyperNEAT - both are based on indirect encodings of neural networks. I believe it's even fair to say that the compressed networks are inspired by HyperNEAT, which came before them.

On DeepMind and Atari, there has long been a dichotomy in reinforcement learning between policy search on the one hand (neuroevolution is a form of policy search) and value-function (or state-space) search on the other. DeepMind's Atari work is basically a value-function method with a deep network that works really well on Atari. Still, it's interesting that the only non-RL method to which they compared their original results was HyperNEAT (see http://arxiv.org/pdf/1312.5602v1.pdf). That HyperNEAT Atari player was trained by Matthew Hausknecht (see https://www.cs.utexas.edu/~mhauskn/projects/atari/movies.html ).

It's true that DeepMind's system achieved better results on most of the games (though not all), but you also have to keep in mind you're comparing the work of a well-funded startup to a single student doing a class project. With more work my guess is HyperNEAT can be competitive.

As usual though, I don't like to overemphasize head-to-head comparisons because what really matters is that the ideas on both sides are useful. Neither should be dismissed or downplayed.

vaginal_yogurt_spoon • 4 karma

Do you have any moral reservations about the potential effects of your research?

sudoman • 3 karma

What do you see as the biggest hurdle in the world of neural evolution?

KennethStanley • 4 karma

Processing power. Evolution requires every individual created to be "evaluated," which means it has to live out its life to see what it does. As the neural networks evolved in neuroevolution become more advanced, they need longer lifetimes to prove their capabilities, which means more computational complexity and heavy-duty processing. My research group is constantly increasing its computational capabilities to keep up by purchasing more and more powerful multicore servers. Also, we often benefit from longer and longer runs (e.g. thousands of generations instead of just hundreds). Who knows, soon we may start looking at million-generation runs. With open-ended evolution in particular, it can stay interesting for a very long time. All those generations could be prohibitive to run in reasonable time without the right hardware.

You see a similar phenomenon in deep learning as well, where the best results are increasingly coming from the groups and companies with the most powerful computing resources.
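One mitigating factor is that evolution parallelizes naturally: every individual's evaluation is independent, so a population can be farmed out across cores or machines. A minimal sketch with the standard library, where `evaluate` is just a placeholder for living out an individual's lifetime:

```python
# Population evaluation is embarrassingly parallel in evolution.
from multiprocessing import Pool

def evaluate(genome):
    # Placeholder: in practice, simulate the individual's whole life.
    return sum(genome)

if __name__ == "__main__":
    population = [[0.1 * i, 0.2 * i] for i in range(100)]
    with Pool() as pool:  # one worker per core by default
        fitnesses = pool.map(evaluate, population)
    print(max(fitnesses))
```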

xxlolxx345 • 3 karma

Thank you for creating NEAT! I would like to know how to use it. What beginner projects or tutorials can you recommend for me?

KennethStanley • 9 karma

Some people may not know that there are a ton of NEAT platforms available, so the first thing you probably want to do is find the one for the language/platform that you prefer. I maintain a detailed catalog of everything that's available (and how well-tested they are) at http://eplex.cs.ucf.edu/neat_software/

The next thing, once you've chosen your platform, is probably to try to modify the XOR task. Most NEAT platforms come with XOR built in, so you might try changing it to 3-parity or something very similar to XOR, just to get your hands dirty. Then you will have a feel for how to modify and create a new experiment.
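As an illustration of that step, here is a hypothetical sketch of a fitness function generalized from XOR to n-parity. The `activate` method is a stand-in for whatever your chosen platform provides, and the exact fitness formula (in the spirit of the classic XOR setup) is illustrative:

```python
import random
from itertools import product

class DummyNet:
    """Stand-in for an evolved network, for demonstration only."""
    def activate(self, inputs):
        return [random.random()]

def parity_fitness(net, n=3):
    error = 0.0
    for bits in product([0.0, 1.0], repeat=n):
        target = float(sum(bits) % 2)         # parity of the inputs
        output = net.activate(list(bits))[0]  # network output in [0, 1]
        error += abs(output - target)
    # Squared-margin fitness over all 2**n cases, XOR-style.
    return (2 ** n - error) ** 2

print(parity_fitness(DummyNet()))
```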

There's also a NEAT Users Page at http://www.cs.ucf.edu/~kstanley/neat.html with answers to frequent questions, and a NEAT Users Group at https://groups.yahoo.com/neo/groups/neat/info that has historically been a great place for new people to go with questions.

xxlolxx345 • 3 karma

As a Computer Science student at USC, I have an opportunity to get a Master's Degree in only one year. Would you recommend it, and if so, in what major?

I'd like to specialize in Artificial Intelligence, but I am still unsure of which area.

KennethStanley • 7 karma

AI (including especially ML) is a fantastic field right now. My advice is to take a general course and find out what inspires you and then specialize in that area. There is no single best area; it's more a matter of what brings out the best in you personally.

sudoman • 3 karma

Do you enjoy what you do?

KennethStanley • 5 karma

Definitely, being a researcher in AI today is absolutely a fantastic profession. It's exciting, it's always changing, and it often has real world impact. As a professor, I also get a nice degree of autonomy in setting my group's directions. Also, AI and ML are creative endeavors so you get to express yourself creatively as well.

SometimesGood • 3 karma

Opinions have been voiced that research in the areas of artificial life and artificial intelligence should be regulated just as research in synthetic biology, cloning/embryonic stem cells, and gene editing is, for the reason that the results might turn out to be beyond our control or misused. What are your thoughts on the urgency of such claims? In the long term, do you think controlling an artificial intelligence will be an easy problem to solve?

KennethStanley • 6 karma

First I think any threat of this type is a long way off, maybe beyond my lifetime. But we should still take seriously even small threats to humanity. We need to be careful though how we respond to such threats. Regulation is a double-edged sword because it basically means that only the bad guys end up researching the most powerful ideas, which is an even worse outcome. It might be safer to make sure that the preponderance of effort in the field is by ethical scientists who are aware of the risks, who thereby create a culture of responsibility organically rather than imposed from above. While it is true that someone can still try to steer the technology towards corrupt purposes, they would be enmeshed in a community that would try to thwart their efforts. In general you need collaborators and resources to get anything big done, and people like that would lose such access. There is no simple answer here, but it's also far off so it's very speculative and therefore perhaps not the time to be too specific. But we should still be aware of possible downsides and consider them carefully.

Overall, all technologies can be used for ill or for good, but stopping the march of technology has proven nearly impossible. We will probably need to steer it safely rather than try to stall it or stop it.

ionel_hosu • 2 karma

Do you think that all the Atari games (even the most complex ones, like Montezuma's Revenge) can be mastered only with neural networks (recurrent or not) and reinforcement learning? Or do you think we'll need a memory component in our architecture, like the Neural Turing Machine? I think that we need a long term memory component for some of the Atari 2600 games.

KennethStanley • 4 karma

Memory beyond recurrence might be important. It's hard to be sure, but it makes intuitive sense. That also speaks to plasticity. That is, in neuroevolution another form of memory can be evolved by making the connection weights plastic and subject to neuromodulation. Andrea Soltoggio has done a lot of work in this area, such as https://dspace.lboro.ac.uk/dspace-jspui/handle/2134/17039
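A rough sketch of the general template in that line of work: each connection changes according to an evolved Hebbian rule whose effect is gated by a modulatory signal. The coefficients below are the kind of parameters evolution would tune; the specific values here are placeholders:

```python
# Neuromodulated Hebbian plasticity: a modulatory signal m gates an
# evolved Hebbian rule, so learning happens only when m is nonzero.
def plastic_update(w, pre, post, m, eta=0.1,
                   A=1.0, B=0.0, C=0.0, D=0.0):
    hebbian = A * pre * post + B * pre + C * post + D
    return w + m * eta * hebbian  # m = 0 -> the weight stays put

w = 0.2
w = plastic_update(w, pre=0.9, post=0.7, m=1.0)  # modulated step
print(w)
```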

sudoman • 2 karma

What is the gist of Hyper-NEAT, compared to NEAT?

KennethStanley • 7 karma

Check out http://eplex.cs.ucf.edu/hyperNEATpage/HyperNEAT.html for a quick summary of HyperNEAT.

In short, in NEAT the genome encodes every connection with a single corresponding gene, which is called "direct encoding." If you think about it, that means if we wanted to evolve a network of 1 million connections, we'd need 1 million genes. That's just too much to be tractable. After all, humans themselves have only about 30,000 genes (or about 3 billion base pairs of DNA). But that's amazing when you consider that our brains have 100 trillion connections!

What makes that possible in humans is something called "indirect encoding," which basically means that the genome is a compressed representation of the phenotype (i.e. the brain in this case). That is, there is a few-to-many mapping.

In effect, HyperNEAT turns NEAT into an indirect encoding so that it can now evolve networks with millions or more connections. It does that by embedding the neural networks in a geometry where each neuron has a position, and then encoding the connectivity among the neurons as a pattern that is generated by the genome. That pattern can then have symmetries and regularities just as we see in the connectivity of natural brains.

So overall what you get is the ability to evolve much bigger networks with much more elegant connectivity patterns, but still with most of NEAT's advantages under the hood.
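As a concrete illustration of the querying step, here is a minimal sketch. The CPPN below is an arbitrary fixed toy function just for demonstration; in HyperNEAT the CPPN is itself evolved by NEAT:

```python
# HyperNEAT's core move: feed every pair of neuron positions in the
# substrate to a CPPN, and use its output as the connection weight.
import math

def cppn(x1, y1, x2, y2):
    # Toy pattern generator: produces a geometric weight pattern.
    d = math.hypot(x2 - x1, y2 - y1)
    return math.sin(3 * x1 * x2) * math.exp(-d)

# A small 5x5 grid of neuron positions in [-1, 1]^2.
substrate = [(x / 2.0, y / 2.0) for x in range(-2, 3) for y in range(-2, 3)]

weights = {}
for (x1, y1) in substrate:
    for (x2, y2) in substrate:
        w = cppn(x1, y1, x2, y2)
        if abs(w) > 0.2:  # typical thresholding of weak connections
            weights[(x1, y1, x2, y2)] = w

print(len(weights), "connections generated from one small function")
```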

scibot9000 • 2 karma

I've heard that the state of artificial intelligence today is similar to the state of home computers in the early 80s: exciting and full of possibilities, but intimidatingly esoteric to the layman.

I've been interested in machine learning but the curve seems too steep compared to, for example, HTML where you can just hit Ctrl+J and modify the inner workings of any web page that you visit. AI / ML is hard to play with, since it seems to take place inside a black box, so to speak.

Have you thought about this at all? Do you see any way to make machine learning more accessible to newbies?

KennethStanley • 4 karma

One way I've tried to make ML more accessible to newbies is to put it inside games where you don't need to know anything about programming and instead you can simply interact with the learning algorithm to get an intuitive feel for what it can do. For example, NERO (NeuroEvolution of Robotic Operatives) is a game where you basically set the fitness function through sliders and watch its effects (though it's not necessarily described in those technical terms to the player): http://nerogame.org/

Picbreeder is a great way to experience the structure of a search space: http://picbreeder.org/
It can also teach you about neural representation by clicking on its "DNA" buttons below evolved images.

In general, the field of ML needs to work on communication to make the field more accessible. A lot of stuff indeed is very difficult to read or absorb, and it could be presented better for newbies. I try to be mindful of this issue when we publish papers in my group. Obscure jargony writing is ultimately elitist because it excludes outsiders.

sudoman • 2 karma

Are there many non-academic jobs available for those who study neural evolution in school?

KennethStanley • 7 karma

Not as many as in deep learning, but there are some. But you have to keep in mind that trends come and go, and also, that neuroevolution can be viewed actually as part of deep learning. After all, NEAT was evolving neural networks of many layers before the current deep learning trend started. But the larger point is that in research the best future bets are often lying dormant right before they explode. That's what happened with neural networks and deep learning. Evolution has the nice property that it loves parallelization and enormous computation, so you have to expect its results too are going to start looking quite interesting.

That said, we've had defense funding and commercial funding for our research. The auto industry, the video game industry, and others have an interest in neuroevolution. Galactic Arms Race, which is entirely based on neuroevolution, has sold commercially. And on top of that, the general expertise developed by a neuroevolution researcher is applicable to many types of problems, even outside neuroevolution. I would not be worried about finding jobs with a Ph.D. in this area, but whether the job is specifically about neuroevolution will depend on how well you can sell what you did.

debau • 2 karma

With the rise of Deep Learning, for example, Long Short-Term Memory (LSTM) networks have been revived, which have a very specific biologically-inspired topology. And they work great for time-series data. Do you know if NEAT creates or prefers certain topologies for certain tasks? If I feed it a lot of time-series data, will it come up with an LSTM-like topology?

KennethStanley • 4 karma

This is a very interesting question. In general we might hope that evolution would actually discover architectures that humans have devised cleverly by hand, or that nature devised through its own real-world evolution. I think we do sometimes see hints of this kind of phenomenon. For example, we see something like topographic maps (which we also see in the real cortex) evolving on their own in HyperNEAT: http://eplex.cs.ucf.edu/papers/gauci_nc10.pdf

(Note that while these fields are reminiscent of convolution, unlike with convnets, no one told HyperNEAT about the concept of convolution from the beginning, so it's discovering this type of structure on its own.)

Yet we also have to keep in mind that the deep learning world is trying to work around issues that might not come up in evolution, like vanishing gradients. For example, LSTM is really a way of addressing gradients vanishing through SGD. In evolution, it isn't working in the same gradient-based context so it may not need to develop such a workaround. So it's hard to say what we should expect, but in practice I don't think we've seen something like an LSTM evolve in any case.

Nevertheless, as neuroevolution algorithms do become more sophisticated I do believe we'll increasingly see architectures arising that inform our thinking. That's one of the more intriguing benefits of evolving neural networks, and something that can feed back into the larger deep learning field.

Jehovacoin • 2 karma

Hi Ken, I'm a huge fan of Neural Networks and other types of machine learning. I fancy myself an amateur ML programmer, and have tons of fun with the things it allows me to create. My question for you has to do with your recent work about innovation and the problems caused by pursuing objectives directly.

Do you think the inability to find a unification theory is a result of this process? I have often wondered if our inability to find explanations for natural processes (like the problem of dark matter for instance) is a result of the same type of phenomena that we see when we try to train complicated neural networks, such as overfitting. Sorry if I have not explained my question enough.

KennethStanley • 4 karma

Following on Joel's points, I do believe our work on the problems with objectives shines some light on why some grandiose endeavors seem perpetually stalled. It can be dangerous to frame an entire enterprise around an ambitious objective. Something like a unified theory, or immortality, is so ambitious that almost surely the stepping stones along the path to it will be laid by those not pursuing the ultimate objective. That should scare anyone who creates, for example, a foundation with one of these goals as the sole driver of the venture. By the way, you can see the complete arguments for this insight in our book, which is linked at the top, but it largely relates to the deceptive nature of complex search spaces. In effect we're often treating ridiculously complex problems as naive optimization problems, when what we really need to do is explore broadly and without constraint. It's a tricky problem.

It's also just interesting once again that work on search algorithms in computer science has led to such insights about broader human endeavors. I think it shows just how much research in AI and ML can reveal about the human condition - if we are really going to uncover the mechanisms behind our minds, then we are inevitably going to uncover a lot about human reality along with it. That's an aspect of AI that is often less discussed.

wowdisco • 2 karma

[deleted]

KennethStanley • 6 karma

It's not an unusual point of view. I've even heard deep learning pioneers joke about something similar themselves. I think there's some truth to it - in deep learning we're seeing things like backprop and convolution, which are not really new, doing great things. On the other hand, a lot of the details are new, like the activation functions (ReLUs instead of sigmoids), dropout, etc. People have figured out the right way to tune these structures to work as well as possible. There are also genuinely novel structures like LSTMs out there.

Ultimately I think this kind of critique might be a red herring. The real question is, does it matter? If I think of an idea and it pans out only 15 years later, is the outcome any less exciting? Maybe more powerful machines were the catalyst we needed, but now that they are revealing something about what's possible, that's great. I consider it real progress.

On the other hand, neuroevolution is not as well known as deep learning yet in neuroevolution you do have genuinely novel algorithms without clear precedent. NEAT is not like anything that came before it, nor is HyperNEAT or novelty search. Not only that, but NEAT was itself evolving deep networks before deep learning. These algorithms are providing insights about something different, but complementary to deep learning. We should all be talking more.

I don't mean to suggest anything is superior. If you follow the philosophy of our book, all these paths are worth pursuing and we don't need an objective comparison to know that's true. Let's celebrate all the insights we can get and try to unify them as much as possible.

arazaes • 2 karma

Hi Ken

I’m Adam Stanton, I work in Dr Channon’s group over at Keele, UK. I have some awareness of the broader questions that crop up when we think about open-ended evolutionary systems, so my specific question is this: do you see novelty search contributing in any way to the ongoing debate about what OEE is and how we can robustly achieve it in an artificial system?

Cheers Adam

KennethStanley • 4 karma

Great to hear from Alastair Channon's group. I'm very interested in open-ended evolution (OEE), as you probably know.

Novelty search has at least profoundly impacted my own thinking on OEE. Historically I think a lot of people investigating OEE put weight on fitness and adaptation, but I've come to understand these as objective concepts, which actually work in conflict with divergence, which is the path of novelty. So I think many of these artificial worlds are actually doomed to convergence because of the weight they place on competition and fitness. I think of them sort of as "death match Earths," places dominated by the kill-or-be-killed mentality. That is just not conducive to an explosion of diversity and rather exemplifies the most rotten and stagnant corners of Earth. Evolution is most constrained when it can't take any risks, and when any deviation from top fitness risks sudden death, you won't see a lot of creativity.

Novelty search is kind of the ultimate risk taker. It is willing to go anywhere, no matter how poor the performance, as long as it's something new. I think it ultimately exposes a philosophical issue at the heart of OEE regarding what we actually care to see from such a system. Random search is the most open-ended of all, but terrible for producing anything interesting. We want a divergent search process that hits the right stuff.

Novelty search also shows that novelty alone is not the full answer to OEE, because simply running novelty search in a closed world will not lead to endless interesting stuff - there are only so many paths in a closed maze before we just don't care anymore. So novelty search exposes a factor - divergence - that matters a lot, but it is not the whole answer to the problem.

By the way you can see some more of our thoughts on OEE in our work on Chromaria: http://eplex.cs.ucf.edu/papers/soros_alife14.pdf

LITERALLYCHEERILEE • 2 karma

What is it like juggling academics and game development?

KennethStanley • 5 karma

You have to figure out a way to make the games academic! There are academic conferences on AI and games like IEEE Computational Intelligence in Games (CIG) or AAAI's Artificial Intelligence and Interactive Digital Entertainment (AIIDE). So you can publish in these areas and have it help your academic career.

That said, developing near-commercial-quality games like NERO, Galactic Arms Race, or Petalz is hard work. You have to be inspired to be willing to put in the time it takes. But it's really exciting and rewarding, so that keeps me motivated. After all, unlike almost any other academic project, thousands (or more) of people can end up experiencing your AI through your game, so you're really impacting a bigger population than you otherwise could and exposing them to technologies like neuroevolution that otherwise would remain obscure to them.

With some great colleagues, I founded http://www.aigameresearch.org/ to support other researchers sharing their AI-based and ML-based games with the public. I think it's a worthy cause, and also fun.

xxlolxx345 • 2 karma

What do you think about OpenAI? Would you work for them?

KennethStanley • 3 karma

I think they are starting out with a talented group of people and have the potential to do groundbreaking work. They're in competition with a lot of other great places in academia and industry, but there's room for all of it right now. Any of these groups could end up doing something revolutionary, so I wouldn't want to place a bet on just one. Often the most revolutionary ideas come from where you least expect them. But OpenAI will no doubt be a positive contributor.

Regarding their ideological position on AI, I think it's great that some people take the open approach, but it's okay that some people don't as well. There's room for a lot of different philosophies and approaches. They are also trying to make a positive statement for the future with their perspective, and the motivation there is good. That said, I think it will be hard really to control where the biggest leaps happen or who will own them. But to get to human-level AI will take so many leaps that it's highly unlikely in my view that one group will have proprietary access somehow to all of it.

I can't really say whether I'd work for them given that it's a hypothetical without any context, i.e. under what conditions? There is probably some set of conditions under which I'd work for them.

vogon101 • 2 karma

For your research, what languages would you mainly use? Would it be object-oriented, functional, both (i.e. Scala), something like Python, or something closer to the bone like C? Do you use DSLs (domain-specific languages) and if so, which?

Sorry for all the questions but this is one of my favourite areas of cs, I recently did a project that was evolving neural nets (on a much, much simpler scale of course). Thanks for doing this AMA

KennethStanley • 2 karma

I'm pretty agnostic about languages, but probably most of the popular code at least in neuroevolution is in C++ and C#. But you can find NEAT and its variants in almost any language (http://eplex.cs.ucf.edu/neat_software/). People also like to put Python in front of some of these. Basically, I'd use the language you find most comfortable.

TheDutchDevil • 2 karma

As a computer science student interested in picking an intro to AI course and thinking about specialising in AI in my master's: what makes the field so great? What (fun) problems do you solve that you don't see in other fields?

KennethStanley • 3 karma

What makes the field great is that it is one of the most profound intellectual problems of all time (up there with the unification of physics or the origin of consciousness), yet unlike physics so much of AI is still so wide open that almost anyone can still make big contributions. It is a huge sandbox with tons of stimulating ideas and a lot of low hanging fruit remaining, and it's recently making enormous strides, which keeps it in the popular imagination and fuels and supports its progress. As far as problems, anything a brain can do is open for AI research, not just technical engineering problems like how to control a robot, but how to create art and music as well. Everything we do (and more) is on the table with AI.

LinuxUser1 • 2 karma

What are, in your opinion, some skills that are necessary or useful for learning machine learning or AI algorithms? Thanks in advance.

KennethStanley • 3 karma

The main skill is to know something about programming. Probably some math would also help, but what kind of math really depends on what kind of algorithm (there are many), so I would not assume you need to be a math genius. A bit of knowledge of neuroscience, biology, and/or psychology also can't hurt for context. But basically I think anyone who can learn to program can learn AI and ML at least as a hobby.

FellTheCommonTroll • 2 karma

What is your favourite use of learning algorithms that you can play with? Something along the lines of Neural Guppies.

KennethStanley • 2 karma

Check out some of the games and toys from our own group: NERO, Galactic Arms Race, Picbreeder, Petalz. All of them are linked at the top in the AMA introduction. They're all about playing with learning algorithms. Also see http://www.aigameresearch.org/ for a whole collection of these kinds of games from our group and others.

Deity_of_Reddit • 2 karma

Are you making Skynet? Why?

KennethStanley • 3 karma

I don't think anyone in AI wants to build Skynet. Probably no one outside AI wants to build it either. Who wants autonomous killer robots roaming the world? We mainly want to build things that help people and make the world better.

jrob6815 • 2 karma

Has your algorithm ever made you doubt your reality?

KennethStanley • 1 karma

Maybe not as far as "doubt reality," but when it first hit me that the best way to achieve something might be to stop trying to achieve it, that did change my view of reality. I'd always been taught (like many) that the way to achievement is to set an objective and then work towards it. All the search algorithms I knew about approached search in this way. So all my assumptions were suddenly upended. And within seconds I was thinking of radically different ways something could be learned or solved. My whole way of thinking shifted almost instantaneously. It was a shock and a rush. That reality-distorting effect of this insight is one of the reasons we wrote the book.

ThatDarnSJDoubleW • 2 karma

Any libraries that you'd recommend for neuroevolution algorithms?

FeverishPuddle • 2 karma

Do you come up with the acronym first and then figure out what each letter means? I mean, every acronym I hear is so perfect. You'd think at least one would be like GGTHYLK or something.

KennethStanley • 1 karma

That's funny - you're right, why aren't there more acronyms like GGTHYLK? It was a long time ago that I came up with the acronym NEAT, but if I remember right it was one of a few I thought of that describe the algorithm succinctly. So yeah, I can't really say it was just total random luck that it worked out so "neatly."

cybelechild • 2 karma

I hope there is time for more questions. Mine is kind of specific. I've been using NEAT and its derivatives for a few years now, and there is a thing that bothers me about HyperNEAT and ES-HyperNEAT. As I understand it, the idea behind having a CPPN, among other things, is to have smaller genomes that can create much larger phenotypes. However, pretty often I'd run into the case where a CPPN ends up being much larger than its phenotype. I think it becomes more pronounced when you add things like connection cost to the fitness. So, how do you overcome such things and minimize the genome as well as the phenotype? I guess including yet another objective for that is an option ...

Thanks in advance! I hope my question makes sense

KennethStanley • 1 karma

I completely agree that you don't want to end up with a CPPN bigger than the substrate (i.e. the phenotype). If that's happening, it's not really doing what we hope. It sounds to me like a bit of a technical issue - the first thing you'd want to look at is mutation rates. It sounds like structure in the CPPN is growing perhaps too fast in your experiments. These algorithms need time to optimize new structure as it appears. That said, of course there are other possible issues at play here and it could just be that the problem is posing a serious obstacle to HyperNEAT. You may also want to consider that often the most elegant and compact CPPNs evolve under non-objective conditions, i.e. with at least some novelty in the mix, or even novelty search alone. Objectives tend to lead to an accumulation of structure over time.

I would be hesitant before adding a new "simplicity objective" because that's kind of ad hoc. My guess is that there are ways to slow down the CPPN growth in your experiments that could be more satisfying, but of course the devil is in the details, which I don't know.
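To make the mutation-rate suggestion concrete, here's a minimal sketch of biasing structural mutation so CPPN topology grows slowly. The rate names, values, and genome methods are illustrative assumptions, not settings from any particular NEAT library:

```python
# Hypothetical sketch: keep NEAT-style structural mutation rare so new
# CPPN structure has time to be optimized before more is added.
import random

ADD_NODE_PROB = 0.01    # structural additions kept rare (illustrative value)
ADD_CONN_PROB = 0.03
WEIGHT_MUT_PROB = 0.8   # let weight tuning dominate

def mutate(genome):
    """Apply one round of mutation, favoring weight tweaks over growth."""
    if random.random() < ADD_NODE_PROB:
        genome.add_random_node()        # hypothetical: splits an existing connection
    if random.random() < ADD_CONN_PROB:
        genome.add_random_connection()  # hypothetical helper
    if random.random() < WEIGHT_MUT_PROB:
        genome.perturb_random_weight()  # hypothetical: small Gaussian nudge
    return genome
```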

seriouslyliterally2 karma

I've always wondered, what will it take to move toward some kind of sentience? Obviously much more powerful computers will be needed but what kind of algorithms? Is it enough to create a recursive algorithm that's designed to continually learn more? Also, I find similarities between the blockchain idea and consciousness. Anyway, these are just the thoughts of someone who knows nothing about AI. Feel free to shoot my ideas down...

KennethStanley1 karma

I don't think we know enough about sentience or consciousness to comment intelligently on them from an algorithmic perspective. Even if we grant that these concepts are perhaps only vaguely defined, the algorithms that enable them remain mysterious. I'm not trying to skirt the issue, it's just that I think in all humility that we have to admit that the algorithms we have today, while making good progress, are not yet illuminating these high-level questions.

Of course, we can still comment on them philosophically, and I think there are some interesting discussions to be had there even today, but that's different from throwing out algorithmic suggestions.

AnvaMiba2 karma

Thanks for doing this AMA!

In the paper Critical Factors in the Performance of HyperNEAT van den Berg and Whiteson provide some negative results obtained on their implementation of HyperNEAT which call into question the ability of HyperNEAT to learn in complex ("fractured") tasks.

In your response CPPNs Effectively Encode Fracture: A Response to Critical Factors in the Performance of HyperNEAT you refute van den Berg and Whiteson's negative results using your own implementation and Picbreeder, attributing the negative results to improper hyperparameter selection and other implementation details.

Does this imply that HyperNEAT is fragile and requires lots of fiddling and tweaking of hyperparameters and other implementation details to make it work?

I think this is an important point, because the main attractive point of structural neural evolution like HyperNEAT compared to gradient-based deep learning is that structural neural evolution directly optimizes the network topology and architecture, while in conventional deep learning the network topology and architecture are high-dimensional discrete hyperparameters and getting them right is kind of an art. But if HyperNEAT is no less sensitive to high-dimensional hyperparameters and architectural details than conventional deep learning, then what is its advantage?

KennethStanley1 karma

Good research question here. I think you're right that one implication is indeed that HyperNEAT is sensitive to its parameters, just as deep learning algorithms are. But I think that's more of a superficial implication, and misses the deeper one. Let me give you my take:

First, I don't think it's really accurate to say that "the main attractive point of structural neural evolution like HyperNEAT compared to gradient-based deep learning is that structural neural evolution directly optimizes the network topology and architecture while in conventional deep learning the network topology and architecture are high-dimensional discrete hyperparameters and getting them right is kind of an art." This kind of sentiment pits different approaches against each other as if they are adversaries that have to fight to the death with only one surviving. That's not in my view a healthy way to conceptualize the field of AI/ML or what's really going on in it. In the long march to high-level AI, these methods are all just stepping stones, and the value they bring is ultimately the conceptual building blocks they add to the conversation. Deep learning and HyperNEAT are adding completely different yet complementary conceptual insights to the conversation. So I think they're both important contributors and one does not have to have an "advantage" - these are really apples and oranges.

That said, the deeper point of our response (which you linked) is that in the end, you get much better representations out of indirect encodings like HyperNEAT when they are not entirely objectively driven. This is a subtle yet fundamental insight, and it does relate to deep learning because all of deep learning is objectively driven (so it can't yet benefit from this observation). There is currently no analogue to novelty search in deep learning. But in the world of neuroevolution, we have these non-objective algorithms like novelty search (which now has many variants), and these lead to quite elegant representations. So if you want to really see HyperNEAT shine, run it in at least a partially non-objective context (e.g. by combining novelty search with some objectives) and you will start to see a lot of very interesting structure in the genetic representation that encodes the network.

So really what we're looking at is not so much an "advantage" but rather the ability to investigate and observe phenomena that do not even exist in the world of deep learning, where there is no such thing as a non-objective search process (even unsupervised deep learning algorithms are driven by the objective of minimizing an error). We should be investigating these things because they can come back to haunt deep learning as well. We are also learning with HyperNEAT about search and how it interacts with an indirect encoding, giving us a lot of insight into evolution in nature. So we would not want to couch such an investigation as a superficial competition between methods.

Everything has parameters. The universe has parameters like gravity and the speed of light. Is the universe any less impressive for having evolved human brains by virtue of its potentially brittle parameters? Let's not get carried away with pinning all our admiration for a method on its need for good parameter settings. At a practical level, HyperNEAT's advantage is that it lets you do things you can't do as easily with gradient methods, because the fitness function in HyperNEAT doesn't need to be differentiable - no gradient needs to be computed at all. At a more theoretical level, the value of these methods in the long run is in what they teach, and indirect encoding (as in HyperNEAT) is teaching us different lessons from what we're learning in deep learning. We should not stop investigating any of them until we stop learning from them, and there is a ton left to learn from the world of indirect encoding.

sudoman1 karma

Do you think that artificial general intelligence will be accelerated most by neural evolution or hard-coded logic?

KennethStanley8 karma

I think it's more likely to come from learning than from something hard-coded. However, that doesn't mean everything will necessarily be evolved. It can also be gradient-based, or something else. But evolution can play an important role because it is a sandbox like no other. In other words, because it isn't based on computing an explicit gradient, you have more freedom to try exotic representations and diversity techniques, which has led to a lot of deep insight.

sudoman1 karma

Do you predict that evolution of artificial neural networks will play a significant role in future artificial general intelligence? if so, what will those roles be?

KennethStanley3 karma

It's hard to say what technology will be inside some futuristic AGI. But I think neuroevolution will play a productive role by providing a fountain of ideas. That is, by being so flexible and so open-ended, evolution can play a creative role and expose possibilities we may not have anticipated. It has already done things like that by revealing the problem with objectives and the power of novelty. These are insights gained by doing experiments with neuroevolution. Whether or not these specific techniques literally end up in the AGI, most likely they will at least inspire the conceptual foundations of such an endeavor. Also, future artificial brains evolved through neuroevolution may exhibit architectures and dynamics that teach us something about neural networks that we do not presently know, just as natural brains provide some inspiration for the algorithms in neural network research today.

sudoman1 karma

What is a typical day like for you?

KennethStanley4 karma

My typical day is tons of meetings. Meeting with students about research, meeting with outside groups on collaborations (maybe through Skype), committee meetings, etc. Then maybe a bunch of intense work on editing papers and answering emails. If I'm lucky there will also be some time for thinking. Thinking is really important if you want to come up with something new, but you have to be in the right mood. You can't force it. At night I try to devote my time to my family, but if I still have more work I might do it late at night when it's least disruptive to them.

CBlumey1 karma

How necessary do you think that sexual reproduction is in providing variance to Genetic Algorithms? Is it possible to obtain the same results through other measures like higher mutation rates?

KennethStanley2 karma

There's been a lot said on this topic. Commenting just from my own experience with these algorithms, my feeling is that sexual reproduction or "crossover" offers some performance advantages, though they are not necessarily dramatic. Maybe more like up to a 30% improvement. And fiddling with mutation rates probably won't make up for it. So if you really care about that 30% then it might matter, but it's not as fundamentally important as something like speciation in NEAT, without which the algorithm is seriously damaged.
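To make that concrete, here's a minimal sketch of NEAT-style crossover, where genes are aligned by their historical "innovation numbers" so that recombining two different topologies still makes sense. Representing a genome as an {innovation number: gene} dict is an illustrative assumption:

```python
# Minimal sketch of NEAT-style crossover. Matching genes (same innovation
# number in both parents) are inherited from either parent at random;
# disjoint and excess genes come from the fitter parent.
import random

def crossover(fitter_parent, other_parent):
    """Return a child genome (dict keyed by innovation number)."""
    child = {}
    for innov, gene in fitter_parent.items():
        if innov in other_parent and random.random() < 0.5:
            child[innov] = other_parent[innov]  # matching gene: pick either
        else:
            child[innov] = gene  # unmatched genes taken from the fitter parent
    return child
```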

Ktownkemist1 karma

I've always wondered how hard it would be to code for evolution. Is there a way to create a program that can mimic the ability of DNA and life to create emergent properties out of basically thin air? Would it be feasible to create a program that makes a series of, let's say, files that duplicate each other, but with a random chance for an evolution-like event in the code, and then performs a fitness test on the new files, only keeping the ones that meet a certain criterion, say initialization speed or the ability to execute without error?

KennethStanley3 karma

You're basically describing evolutionary algorithms. So yeah, you can do that and people have been doing it for decades. My research group has been investigating them for 10 years: http://eplex.cs.ucf.edu/
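What you describe maps almost directly onto the basic evolutionary loop. Here's a minimal sketch, where fitness() and mutate() are placeholders for whatever you measure and vary (e.g. initialization speed, or running without error):

```python
# A bare-bones evolutionary algorithm: copy candidates with random
# variation, score them, and keep the best to seed the next generation.
import random

def evolve(initial, fitness, mutate, pop_size=100, generations=200):
    population = [mutate(initial) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: pop_size // 2]          # simple truncation selection
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)
```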

D33f1 karma

Would you oppose an evolutionary approach towards creating a general/strong AI?

As a CS student I know ethical questions surrounding AI are about as far from your work as Maxwell's equations are from a cameraman, but some people seem to have strong ethical concerns against evolving an AI, since it would involve creating and killing millions of "intelligences" to evolve a system with the required amount of intelligence.

If you have any, I would love to hear your thoughts on the subject.

Gravity_Salad1 karma

How close are we to constructing a natural human-like neural net? Do we have enough information on how to theoretically build something like that yet, or are we limited by current technologies and don't have enough resources, or both?

KennethStanley4 karma

I think we're a long way off, like decades or centuries. My feeling is that the problem is not so much hardware. I think we'll get to a point in the next couple decades where perhaps we have the hardware to support simulating a human-level brain. Rather, the problem is the supply of ideas: I think we still need a succession of several Einstein-level revelations to get to human-level AI. Maybe on the order of ten of them. You just can't predict that kind of thing systematically. They happen when they happen. So it could be a while. That said, just because we don't hit human-level doesn't mean that the changes we do see in the next 20 years cannot be transformative. On the contrary, AI is already transforming society and will continue to do so, but perhaps not at the human level for some time yet.

rizzit151 karma

How does somebody get into this sort of field of work?

KennethStanley2 karma

Typically you would major in computer science and take some AI or ML classes as an undergrad, and then do a Ph.D. in those areas after college. For any particular area like neuroevolution, when you do your Ph.D. you need to find a school with a professor specializing in that topic who can serve as your Ph.D. advisor.

imuntean1 karma

Ken, I am fascinated by the idea behind NEAT. I am looking forward to reading your book. Here is my question: do you envisage an application of NEAT in social science? I am thinking of psychology, cognitive science, etc. The existing applications are mostly in gaming and engineering. Is your architecture suitable to be applied to social sciences (e.g. nominal variables, discrete fitness function, etc.)?

KennethStanley4 karma

In a way NEAT is already yielding insights into social science. Our book is a good example of that - the ideas there, which apply to how we run social institutions (primarily those aimed at innovation) really would not exist if there had not been NEAT (which led to Picbreeder, which led to novelty search, which led to the ideas in the book). So that's an indirect kind of impact, but I encourage people to see how a machine learning algorithm can actually impact thinking about social progress and human innovation, because it's kind of surprising that that would come about.

For more direct links, like literally running NEAT or HyperNEAT on a social science problem, sure, I do think it can be useful there. It can serve as an optimizer, but maybe more interesting would be problems where there are many answers - the kind where a whole repertoire of perspectives is what you want the algorithm to return. New quality-diversity-based neuroevolution algorithms like NSLC (novelty search with local competition) or MAP-Elites can offer those kinds of repertoire responses.
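To give a feel for how such a repertoire emerges, here's a bare-bones sketch of the MAP-Elites idea. The behavior descriptor, discretization, and helper functions are placeholder assumptions for illustration:

```python
# MAP-Elites sketch: instead of one best solution, keep the best solution
# found in each cell of a grid laid over a behavior descriptor space.
import random

def map_elites(random_solution, mutate, fitness, behavior, iterations=10000):
    archive = {}  # maps a discretized behavior descriptor to its elite
    for i in range(iterations):
        if archive and i > 100:
            parent = random.choice(list(archive.values()))
            candidate = mutate(parent)       # vary an existing elite
        else:
            candidate = random_solution()    # bootstrap with random solutions
        cell = tuple(round(b, 1) for b in behavior(candidate))  # discretize
        if cell not in archive or fitness(candidate) > fitness(archive[cell]):
            archive[cell] = candidate        # new elite for this niche
    return archive  # a repertoire of diverse, high-quality solutions
```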

karanbudhraja1 karma

Hello Dr. Stanley. I am a PhD (Computer Science) student and plan to work on neuroevolution in a swarm environment. Could you please give me some advice or direction in this area?

KennethStanley4 karma

Jorge Gomes and his colleagues in Portugal have done a lot of nice work with novelty search for swarms and multiagent behaviors in general. For example: http://arxiv.org/abs/1304.3362 and http://arxiv.org/abs/1407.0576 I'd check out all the work from that group.

Also, you might find multiagent HyperNEAT interesting; it can evolve a whole team from a single genome: http://eplex.cs.ucf.edu/papers/dambrosio_evin13.pdf

RealPeterNorth1 karma

Is your brother in KISS ?

KennethStanley3 karma

Someone should ask him that about me if he does an AMA!

JohnLeTigre1 karma

I'm not an expert so correct me if I'm wrong but neural networks are bound by the actual weighting implementation and the fitness evaluation function which are determined a-priori by the programmer. A call this mimicking, not learning.

There is no system to simulate: self-awareness, concepts, meaning, input senses, intentions, self-reflexivity, etc.

The main things that neural nets simulate are fixed needs and a feedback system.

Although I think that neural nets will greatly participate in AI (and they actually do at this very moment), I mostly consider that AI should be a collection of sub-systems.

If this is the case, what sub-system would neural-nets be good at? and do you think we should focus more on good-old symbolic systems?

thanks

KennethStanley1 karma

I see where you're going, but I think neural networks are not really as constrained as you worry they are. While it's a legitimate issue to raise questions about, ultimately you don't have to supply a neural network with all this rigid constraint up front and just get an answer out at the end. That's just the most stereotypical and perhaps most publicized way they can be used. For example, algorithms like novelty search (http://eplex.cs.ucf.edu/noveltysearch/userspage/) don't work like that at all. You don't even tell novelty search what you're looking for, so there is no a priori expectation about what it will produce. In deep learning, unsupervised methods can similarly lead to creative or surprising constructs that are not predictable a priori, or even the ability to generate novel ideas (instances of a class) dreamed up by the neural network alone.

In fact, why theoretically should artificial neural networks not be able to do anything that human brains do? Human brains, after all, are neural networks. Perhaps there is something special about the physical makeup of brains that would prevent artificial neural networks from doing the same, but that's only speculation right now. Of course there are very important things we don't know how to do with artificial neural networks, but that's different from saying that they can never be done.

You're right though that self-awareness, meaning, etc., represent massive challenges. Will neural networks solve them? Maybe. I wouldn't want to pretend to know the future. But I don't see a reason they have any less potential than symbolic systems or anything else. Ultimately, given that neurons are the only things in the universe that actually have produced self-awareness, betting on artificial neural networks (which are at least inspired by neurons, if not exactly the same, of course) as having at least a shot doesn't seem too crazy to me.

Jfro891 karma

Hi, your work is utterly fascinating but I really want to know, what would be your plan for surviving the zombie apocalypse?

KennethStanley3 karma

Here is an interesting fact I learned when teaching a class on AI for Game Programming: If you tell students to come up with their own games, about 50% of them will involve zombies. Apparently you are not the only one worrying about the zombie apocalypse. Maybe playing a lot of zombie games can help you prepare.

Zulban1 karma

Hey! I'm very interested in comparing machine learning to human learning. What they are both bad at, good at, or better at. Could you recommend any books, articles, or authors to me?

KennethStanley3 karma

This is a tough one because AI and ML keep improving, so the answer for the AI changes day by day. In short, AI is catching up to humans, but still has a long way to go. I think you'll find most high-level cognitive tasks are still dominated by humans. But AI and ML are catching up on the low level stuff that's closer to perception. That's a very rough attempt to characterize a complicated situation that keeps changing.

TheSlimyDog1 karma

I've looked into your book and I find the concept quite interesting. Have you considered the fact that objectives drive life at a much faster pace than natural evolution thereby making them more suited for the short lives that we have? People can certainly achieve greatness, but I think that without objectives, it's very difficult if not impossible for one person to get there just by chance.

PS: I'm currently an undergraduate studying Computer Science who's very interested in subjects like Neural networks, Machine Learning, AI, NLP, and Big Data. Where should I start? Should I first study the Statistics and Math or focus on the theory of the subjects and learn the Math as and when it comes? And what subject would you consider the root of all of these?

KennethStanley1 karma

It may seem like living without an objective is impractical, but actually most successful people did just that. They followed the path of interestingness even when they did not know where it led. Our book is full of examples of individuals with this kind of life story, especially in chapter 2 (Victory for the Aimless). So I'd be very interested to know whether you would still hold your view after reading the book.

Just to give one well-known example, Steve Jobs dropped out of college with no clear objective. Think of all the people who stay in college to pursue their personal objectives. He didn't. Instead, he dropped out so he could do whatever he felt like doing, which included sitting in on a calligraphy class. While it wouldn't have supported his major, now that he had no major he could sit in on whatever he wanted. And that led to the idea of screen fonts in the early Macintosh computers, which revolutionized the computer industry. Good thing he didn't have a clear objective. People who are radically successful often follow the winds of serendipity - they set themselves afloat with no clear direction in mind and catch the wind of opportunity when it blows their way. That is not an objective approach to life. However - and the book makes the following point clear - if your aspirations are modest then objectives do make a lot of sense. Like if you want to major in computer science, of course by all means major in computer science like millions who have come before you and make it your objective. It will probably work out. But if you want to change the world and arrive at a radically new and innovative place, objectives are not the best compass to get there. The book has plenty of evidence for that, both from real life and from hard empirical algorithmic experiments on search processes run successfully without explicit objectives.

On your second question, my suggestion is to skim basic ML stuff, like say in a book on neural networks and maybe one other topic of choice, and don't worry about it being confusing. Instead, decide what classes to take based on what you read that you wish you knew more about. In other words, follow your interests - don't be too objective about it. You'll end up better at doing things you like rather than things you think you "should" be doing. There are many paths to being a good AI researcher. That said, of course some math will be important.

Maltheal1 karma

I'm currently a freshman in college, majoring in CS and math. It's been my dream job for awhile to do something in the field of AI. What would you suggest to do to start working my way towards that field?

KennethStanley4 karma

I'd take an AI class if you can, and also take a look at deep learning and neuroevolution on your own time given that neural networks are hot right now. Then, most importantly, once you've surveyed today's ideas, figure out which one really inspires you. What matters is where your own instincts lead you.

Variable_Decision531 karma

Hi Ken Stanly, thank you for doing this AMA. How did you find yourself working in this field?

KennethStanley3 karma

I've been interested in AI since I was in third grade and my family bought our first computer, a Commodore 64. Back then, I wanted to program it to have a conversation with me, and I was really disappointed to find out there isn't a book you can buy that tells you how to do that. I didn't quite understand at the time that AI is one of the greatest unsolved problems of all time! Since then I've been hooked on figuring out how it works and how it happened (hence the evolution part).

philanthropomath1 karma

Can't believe it's been 5 years already since I saw you guys in the lab, great progress!

Would you say your research could push ahead self driving cars to be safer and more adaptive?

KennethStanley3 karma

You saw us in the lab? I'm not sure who this is? In any case, sure, we might see neuroevolution influence self-driving cars, and automotive issues in general. I actually worked with Toyota on a few patents involving neuroevolution and cars.

sudoman1 karma

Have you considered designing cell layout by mapping cells and connection weights onto fractals?

KennethStanley5 karma

Neuroevolution can be a good approach to evolving various kinds of structures and architectures (i.e. not just neural networks). CPPNs (a special genetic representation used in both HyperNEAT and Picbreeder) have begun to be used for these kinds of applications. For example, Nick Cheney's work on soft robots really exemplifies the idea: http://dl.acm.org/citation.cfm?id=2754662 and http://www.sciencedirect.com/science/article/pii/S0304397515005344

Or see http://endlessforms.com/ where the images are generated by NEAT-evolved CPPNs.
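If you're curious what a CPPN actually computes, here's a toy sketch: a composition of simple pattern functions queried over coordinates, so a tiny genome can paint a large, regular pattern at any resolution. A real CPPN's topology and function choices are evolved; this fixed composition is purely illustrative:

```python
# Toy CPPN: composed functions (sine, Gaussian-like decay, tanh) mapped
# over coordinates produce regularities like symmetry and repetition.
import math

def cppn(x, y):
    """Map a coordinate to an intensity using composed pattern functions."""
    d = math.sqrt(x * x + y * y)             # symmetry via distance from center
    return math.sin(5 * x) * math.exp(-d) + math.tanh(3 * y)

# Query the same tiny "genome" at any resolution:
image = [[cppn(x / 50.0 - 1, y / 50.0 - 1) for x in range(100)]
         for y in range(100)]
```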

sudoman1 karma

Thanks for the interesting info, but It turns out that my question wasn't very clear.

I had thought that HyperNEAT's internal connectivity geometry was expressible as a multi-dimensional, or multi-layered, image. So I thought: why not define the geometry to be based on some tiny evolutionarily-defined area within a pre-existing fractal? One could simply evolve the location of the area and make slight modifications to the fractal equation in order to create novel network geometries. Now that I've read a bit more about HyperNEAT, it seems like its network creation works differently than I expected.

KennethStanley3 karma

Ah okay, sorry I missed the point the first time. Actually I think you are understanding HyperNEAT correctly, and you could do what you're suggesting. That's just a different functional representation than the conventional CPPNs in HyperNEAT, but sure, you could do it - you could situate the neural geometry in some part of a fractal and evolve from there. The key question though is whether that offers any advantage - should neural geometry be somehow fractal? There are some ideas in perception that are fractal (after all, self-similarity at different scales does emerge in perception), but it's not clear that really translates literally to a fractal geometry among the neurons and connections themselves. I'd say before you choose any geometry for the CPPN it's a good idea to think through why that geometry might be aligned with your problem domain.
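For reference, here's roughly what the substrate query under discussion looks like: the CPPN takes the coordinates of two neurons and returns their connection weight, so the network's geometry is whatever coordinate system you choose - which could in principle be points sampled from a fractal. The cppn_query function and the threshold value here are placeholder assumptions:

```python
# HyperNEAT-style substrate query sketch: connect every pair of neurons
# whose CPPN output magnitude clears a threshold, using the neurons'
# positions as the CPPN's inputs.
def build_substrate(neuron_positions, cppn_query, threshold=0.2):
    """neuron_positions: list of (x, y); returns {(i, j): weight}."""
    connections = {}
    for i, (x1, y1) in enumerate(neuron_positions):
        for j, (x2, y2) in enumerate(neuron_positions):
            w = cppn_query(x1, y1, x2, y2)
            if abs(w) > threshold:   # prune weak connections
                connections[(i, j)] = w
    return connections
```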

letoseldon1 karma

Thank you for doing this AMA. The holy grail of ML is unsupervised learning, and the challenge for the foreseeable future seems to be centered on taking advantage of the vast quantities of unlabeled data on the Internet.

What approach do you find most promising for unsupervised learning?

KennethStanley5 karma

I agree that unsupervised learning is a huge opportunity area. I think the biggest progress there is coming from generative approaches and autoencoders on the deep learning side, but also neuroevolution has a lot to offer here. Consider that novelty search is really a kind of unsupervised learning algorithm - it's a way of uncovering what's "interesting" in the search space with no specific objective in mind. That's why we started exploring this idea with the "Divergent Discriminative Feature Accumulation" (DDFA) algorithm (http://eplex.cs.ucf.edu/papers/szerlip_arxiv1406.1833v2.pdf), which uses novelty search to collect novel feature detectors.

If you think about it, we as humans spend a lot of our time thinking of new ways to interpret the world, even with no particular goal in mind. Is a palm tree a tree or is it not a tree? We like to cut things up in different ways depending on the context. In a sense a divergent evolutionary search is a perfect metaphor for this kind of passive thinking. So I think there's a lot of potential there.
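For anyone wanting the mechanics behind that "unsupervised" flavor, here's a minimal sketch of the novelty score at the heart of novelty search (and of DDFA's feature accumulation): an individual is rewarded for how far its behavior lies from its nearest neighbors among everything seen so far, with no task objective anywhere. Euclidean distance and k=15 are common but arbitrary choices here:

```python
# Novelty score sketch: mean distance to the k nearest behaviors among
# the current population plus an archive of past behaviors.
def novelty(behavior, others, k=15):
    """behavior: a vector; others: list of behavior vectors seen so far."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    distances = sorted(dist(behavior, other) for other in others)
    nearest = distances[:k]
    return sum(nearest) / len(nearest) if nearest else float("inf")
```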

sudoman1 karma

As an associate professor, do you write lots of software or do your students implement most of your ideas?

KennethStanley3 karma

We develop so much software that there's no way I could do it all myself. That's one of the advantages of having a research group - it really amplifies what you can do on your own. Most software is therefore primarily developed by the students in the group.

QueWeaHermano1 karma

What does the NEAT algorithm do? I'm not into that kind of things, I don't understand much, but I'm curious so...

KennethStanley5 karma

NEAT stands for NeuroEvolution of Augmenting Topologies. What it does is generate a bunch of small, sort-of-random artificial brains (called "artificial neural networks") in your computer. Say 100 of them. Because they're all sort of random, they are pretty terrible at doing anything. Let's say we ask them to drive a robot through a maze. Because they're terrible, most will crash almost right away. But some will crash just a bit farther out than others. For those - the ones that do better - NEAT will allow them to have offspring - new little brains that will also try to guide the robot.

These offspring will be slightly different from their parents (just as offspring on Earth are), so some might even be better. As NEAT keeps playing this game, it also allows some mutations to cause the brains to grow (that's the "augmenting" part of NEAT) so they can become more complex over evolution. If you let NEAT play this game for a long time (which is basically evolution), then the brains gradually get better and perhaps more complex (if it helps them get better), sort of like what we saw happen in natural evolution (well, at least metaphorically).

Eventually we may see a robot that knows how to navigate a maze, evolved fully by NEAT without a human programmer needed. That's the basic idea: an algorithm to evolve increasingly complex artificial neural networks. NEAT uses some tricks to make that possible (including historical markings and speciation), but that's the basic story.
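Here's a rough sketch of the speciation trick just mentioned: NEAT measures how compatible two genomes are and groups similar ones into species, so new structure is protected while it's being optimized. Representing a genome as a simple {innovation number: weight} dict is an assumption for illustration, and the coefficients are typical values rather than definitive ones:

```python
# NEAT-style compatibility distance sketch. Genomes with a small distance
# between them are placed in the same species and compete mainly within it.
def compatibility(genome1, genome2, c1=1.0, c3=0.4):
    """Distance between two genomes represented as {innovation: weight}."""
    shared = set(genome1) & set(genome2)
    mismatched = len(set(genome1) ^ set(genome2))  # excess + disjoint genes,
                                                   # combined here for brevity
    n = max(len(genome1), len(genome2), 1)
    avg_weight_diff = (sum(abs(genome1[i] - genome2[i]) for i in shared)
                       / max(len(shared), 1))
    return c1 * mismatched / n + c3 * avg_weight_diff
```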

rhiever1 karma

Hi Ken, this is Randy Olson. Working with you at UCF had a huge and positive impact on my life: I wouldn't have gone to grad school if it weren't for your guidance early on in my career, and I certainly wouldn't be happily working as an AI researcher today. That said, I'm pretty excited to get the chance to ask you some more questions during your AMA!

Deep learning is tremendously popular right now, and with it connectionism has again risen into the spotlight of AI research. Where do you think neuroevolution fits in? What drawbacks of deep learning can neuroevolution fill, and vice versa?

KennethStanley5 karma

Randy it's great to see you here though I'm getting to your question late. I'm glad to hear our lab had a positive impact on your very successful career. Let me point you to my answer to another similar question at https://www.reddit.com/r/IAmA/comments/3xqcrk/im_ken_stanley_artificial_intelligence_professor/cy6u6sa

To address it very briefly here: brains evolved, and brains learn. We would be remiss to expend all our effort on one side of the equation but not the other. They are complementary. The resurgence in interest in neural networks is good for neuroevolution as well as deep learning. (Or you might even say that neuroevolution is part of deep learning.)

sudoman1 karma

In your opinion, what are the most impressive tasks solved by NEAT and/or other artificial neural network systems?

sudoman1 karma

Are you studying ways that evolved neural nets could possibly learn as they operate? Have any problems been solved with these approaches?

KennethStanley4 karma

Yes indeed we're very interested in evolving adaptive or plastic networks that learn during their lifetime. After all, that's what nature did. There is some progress in this area, for example my work with Sebastian Risi: http://eplex.cs.ucf.edu/papers/risi_ijcnn12.pdf

Andrea Soltoggio has probably influenced my thinking about evolving plastic networks the most.
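As a flavor of what "plastic" means here, below is a sketch of the kind of generalized Hebbian rule common in this literature (often called the ABCD rule): each connection's weight changes during the network's lifetime as a function of pre- and post-synaptic activity, with the coefficients set by evolution rather than by hand. This is a sketch of the general idea, not the exact rule from any specific paper:

```python
# Generalized Hebbian ("ABCD") plasticity sketch: evolution sets the
# learning rate eta and coefficients A-D per connection; the weight then
# adapts during the network's lifetime as it processes inputs.
def hebbian_update(weight, pre, post, eta=0.05,
                   A=1.0, B=0.0, C=0.0, D=0.0):
    """One lifetime learning step for a single connection."""
    delta = eta * (A * pre * post + B * pre + C * post + D)
    return weight + delta
```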