EDIT 3:

Thank you everyone for making this so exciting! I think we are going to call it a day here. Thanks again!!

EDIT 2:

Thanks everyone for the discussion! Keep it going! We will try to respond to more questions as they trickle in. A few resources for anyone interested:

Coding:

Introduction to Programming with Codecademy.

A more advanced course on the Python programming language (one of the most popular coding languages).

Intro to Computer Science (CS50)

Machine learning:

Introduction to Probability (Stat110)

Introduction to Machine Learning

Kaggle Competitions - Not sure where to find data to make predictions on? Want to test your machine learning chops against others? Kaggle is the place to go!

Machine Learning: A Probabilistic Perspective - One of the best textbooks on machine learning.

Code Libraries:

Sklearn - Really great machine learning algorithms that work right out of the box

Tensorflow (with Tutorials) - Advanced machine learning toolkit so you can build your own algorithms.




Hello Redditors! We are Harvard PhD students studying artificial intelligence (AI) and cognition, representing Science in the News (SITN), a Harvard graduate student organization committed to scientific outreach. SITN posts articles on its blog, hosts seminars, creates podcasts, and organizes meet-and-greets between scientists and the public.

Things we are interested in:

AI in general: In what ways does artificial intelligence relate to human cognition? What are the future applications of AI in our daily lives? How will AI change how we do science? What types of things can AI predict? Will AI ever outpace human intelligence?

Graduate school and science communication: As a science outreach organization, how can we effectively engage the public in science? What is graduate school like? What is its culture like, and what was the road to getting here?

Participants include:

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. He has published work on genetic regulation but is currently using machine learning to model animal behavior.

Dana Boebinger is a PhD candidate in the Harvard-MIT program in Speech and Hearing Bioscience and Technology. She uses fMRI to understand the neural mechanisms that underlie human perception of complex sounds, like speech and music. She is currently working with both Josh McDermott and Nancy Kanwisher in the Department of Brain and Cognitive Sciences at MIT.

Adam Riesselman is a PhD candidate in Debora Marks’ lab at Harvard Medical School. He is using machine learning to understand the effects of mutations by modeling genomes from plants, animals, and microbes from the wild.

Kevin Sitek is a PhD candidate in the Harvard Program in Speech and Hearing Bioscience and Technology working with Satra Ghosh and John Gabrieli. He’s interested in how the brain processes sounds, particularly the sound of our own voices while we're speaking. How do we use expectations about what our voice will sound like, as well as feedback of what our voice actually sounds like, to plan what to say next and how to say it?

William Yuan is a graduate student in Prof. Isaac Kohane's lab at Harvard Medical School working on developing image recognition models for pathology.

We will be here from 1-3 pm EST to answer questions!

Proof: Website, Twitter, Facebook

EDIT:

Proof 2: Us by the Harvard Mark I!

Comments: 1469 • Responses: 58

nuggetbasket1101 karma

Should we genuinely be concerned about the rate of progression of artificial intelligence and automation?

SITNHarvard1985 karma

We should be prepared to live in a world filled with AI and automation. Many jobs will become obsolete in the not so distant future. Since we know this is coming, society needs to prepare policies that will make sense in the new era.

-Rockwell (opinion)

SITNHarvard1208 karma

Kevin here, agreeing with Rockwell: I think there's pushback against the Elon Musk-type AI warnings, especially from people within the AI community. As Andrew Ng recently said:

I think that job displacement is a huge problem, and the one that I wish we could focus on, rather than be distracted by these science fiction-ish, dystopian elements.

thewhiterider25644 karma

In what areas or sectors do you see AI taking a serious foothold first (medical, accounting, etc.), and why?

SITNHarvard168 karma

Medical image processing has already taken a huge foothold and shown real promise for helping doctors treat patients. For example, a machine has matched human doctors' performance in identifying skin cancer from pictures alone!

The finance and banking sector is also ripe for automation. Traditionally, humans decide which stocks look good or bad and trade them accordingly; this is a complicated decision process ultimately driven by statistics gathered about each company. Instead of a human reviewing the numbers and making the trades, algorithms now do it automatically.

We still don't know how this will impact our economy and jobs--only time will tell.

SITNHarvard171 karma

William here: there are different levels of concern. It is undeniable that advancements in AI and automation will eventually lead to some sort of upheaval, and there are real concerns that the societal structures and institutions we have in place might not be sufficient to withstand the change in their current form. Unemployment and economic changes are the central factors here. Existential risk is a more nebulous question, but I think there are more pressing issues at hand (global warming and the politics surrounding nuclear weapons come to mind). Maciej Cegłowski has an interesting talk about how AI is likely less dangerous than the alarmism around it.

throwdatpun658 karma

Machine learning is a hot topic right now. What do you all think will be the next big thing in AI?

SITNHarvard1082 karma

Adam here:

From a pure machine learning standpoint, I think unsupervised learning is going to be the next big thing. Right now, researchers feed a machine both the data itself (say, an image of a cat) and a label for it (that it is a cat). This is called supervised learning. Much of the progress in AI has been in this area, and we have seen a ton of great successes from it.

How do we get machines to teach themselves? This is an art called unsupervised learning. When a baby is born, parents don't have to teach it every single thing about the world--it can learn for itself. This is kind of tricky because how do you tell a computer what to pay attention to and what to ignore? This is not very easy, but folks in the AI field are working on this. (For further reading/listening, Yann LeCun has a great talk about this.)
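
To make the supervised/unsupervised distinction concrete, here is a minimal Python sketch using scikit-learn (one of the libraries listed at the top of the page) on made-up data; the point is only that the first model is handed labels, while the second has to find structure on its own.

    # Toy contrast between supervised and unsupervised learning with scikit-learn.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LogisticRegression

    # Two clouds of 2-D points; y records which cloud each point came from.
    X, y = make_blobs(n_samples=200, centers=2, random_state=0)

    # Supervised: the algorithm sees both the data X and the labels y.
    clf = LogisticRegression().fit(X, y)
    print("supervised accuracy:", clf.score(X, y))

    # Unsupervised: same data, no labels. KMeans must discover the two
    # groups purely from the structure of X.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("clusters found without labels:", np.unique(km.labels_))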

Windadct983 karma

Or that is "Not a hot dog"

what_are_you_saying149 karma

As someone currently writing a Ph.D. research proposal and constantly finding myself frustrated with conflicting results in publications with nearly identical experiments, I would love to see an AI capable of parsing through hundreds of research papers, being able to comprehend the experiments and methods outlined (likely the hardest part), then compiling all the results (both visual and text-based) into a database that shows where these experiments differ, which results are the most consistently agreed upon, and which discrepancies seem to best explain the differences in results.

I can't help but feel that once the database is created a simple machine learning algorithm would be able to identify which variables best predict which results and be able to find extremely compelling effects that a human may never notice. My biggest problem is trying to make connections between a paper I read 300 pages back (or even remember the paper for that matter) and the one I am reading now.

With the hundreds of thousands of papers relevant to any particular field, it would be impossible for any researcher to actually read and retain even a small fraction of the relevant research in their field. Every day I think about all the data already out there ready to be mined and analyzed, and the massive discoveries that have already been made but not realized, due to the limitations of the human brain.

Are there any breakthroughs on the horizon for an AI that can comprehend written material with such depth and be able to organize it in a way that can be analyzed by simple predictive modeling?

SITNHarvard83 karma

Adam here:

That's a great idea! And pretty daunting. In the experimental/biological sphere, I have seen a service that scans the literature to find which antibodies bind to which protein. I think this is a much more focused application that seems to work pretty decently.

Tsundokuu70 karma

This is kind of tricky because how do you tell a computer what to pay attention to and what to ignore? This is not very easy, but folks in the AI field are working on this.

I think you may be massively understating this. As you undoubtedly know yourself, this is called the 'frame problem', and AI research has been working on this problem for almost 50 years now without any progress. So it's misleading to say 'we are currently working on it' as if this were a new focus or recent development in research.

Do you have any opinions on Heideggerian A.I.?

SITNHarvard102 karma

Adam here:

Thanks for your response. I guess I was referring to the specific algorithmic framework for unsupervised learning--simply finding P(X) [i.e., a complicated nonlinear probability distribution of your data]. Generative models are used for this; they are useful because they give you a way to probe the underlying (latent) variables in your data and allow you to generate new examples of data.

This was previously tackled with the Wake-Sleep algorithm, without much success, and then with Restricted Boltzmann Machines and Deep Belief Networks, but these have been really challenging to get working and to apply to real-world data.

Recently, models like Variational Autoencoders and Generative Adversarial Networks have broken through as some of the simplest yet most powerful generative models. These allow you to quickly and easily perform complicated tasks on unstructured data, including creating endless drawings of human sketches, generating sentences, and automatically colorizing pictures.

So yes, I agree, folks are working on this, and have been for a long time. With these new techniques, I think we are approaching a new frontier in getting machines to understand our world all on their own.
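
To make the "finding P(X)" idea concrete, here is a minimal sketch using a Gaussian mixture from scikit-learn rather than a VAE or GAN (far humbler, but the same principle): fit a latent-variable model to unlabeled data, inspect what it learned, and sample new data from it.

    # Fit a simple generative model P(X) to unlabeled data, then sample from it.
    # A Gaussian mixture is a classic latent-variable model: each point is assumed
    # to come from one of k hidden ("latent") components.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Unlabeled 1-D data drawn from two regimes the model is never told about.
    X = np.concatenate([rng.normal(-2.0, 0.5, size=(500, 1)),
                        rng.normal(3.0, 1.0, size=(500, 1))])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

    print("avg log P(X):", gmm.score(X))          # likelihood of the data
    print("latent component means:", gmm.means_.ravel())
    new_X, _ = gmm.sample(5)                      # brand-new generated examples
    print("generated samples:", new_X.ravel())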

edit: typo

SITNHarvard111 karma

Rockwell here. I have two picks: natural-language robots and object recognition. I think these will be part of everyday life in the upcoming decades. We’ve already had a taste in the form of robot telemarketers and some AR apps. These will only get better with time, and before you know it our phones may have Jarvis-like capabilities.

Amilo159516 karma

How far are we from actual, realistic sex-bots?

SITNHarvard1606 karma

Depends - how good do you want the sex to be?

yoosteh375 karma

What is it like being graduate students at Harvard? Such a prestigious school, do you feel like you have to exceed expectations with your research?

SITNHarvard1049 karma

My therapist told me not to discuss this issue. - Dana

SITNHarvard641 karma

Haha, but seriously, imposter syndrome is certainly alive and well in the labs here...

That said, Harvard (and the entire Boston area) is a great place to study and work, and we are lucky to have so many resources made available to us.

SITNHarvard179 karma

Adam here:

This keeps me going every day: DO IT.

beastlyfiyah70 karma

Expected this

SITNHarvard118 karma

Adam here:

Yeah, when I'm short on time and need a lot of motivation, this one usually does the trick.

ninesquirrels348 karma

Recently the Facebook engineers turned off a machine-learning program that they were using to translate, which has been reported as having organically created its own language. Is this anywhere near as interesting as it seems on the surface? Why or why not?

SITNHarvard364 karma

Adam here:

So I scoured the internet and I think I found the original article about this. In short, I would say this is nothing to be afraid of!

A big question in machine learning is: how do you get responses that look like something a human produced, or that you would see in the real world? (Say you want a chatbot that speaks English.) Suppose you have a machine that can spit out examples of sentences or pictures. One way to train it would be to have the machine generate a sentence and then tell it whether it did a good or bad job. It is hard to have a human provide that feedback because it takes a lot of time and is slow. Since these are learning algorithms that “teach themselves”, they need millions of examples to work correctly, and telling a machine whether it did a good or bad job millions of times is out of reach for humans.

Another way to do this is to have two machines doing two different jobs. One produces sentences (the generator), and the other tells it whether those sentences look like real language (the discriminator).

From what I can understand from the article, the machine that was spitting out language was working, but the machine that asked “Does this look like English or not?” was not. Since their end goal was to have a machine that spoke English, the system was definitely not doing its job, so they shut it down. The machines that were producing language did not understand what they were saying, so I would almost classify their output as garbage.

For further reading, these things are called Generative Adversarial Networks, and can do some pretty cool stuff, like dream up amazing pictures that look almost real! Original paper here.
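
To show the two-machine setup mechanically, here is a toy adversarial loop in plain Python/numpy (my own illustration, not Facebook's system, and far simpler than a real GAN): a one-line generator learns to produce fake numbers that a tiny discriminator can no longer tell apart from real ones.

    # Toy generator-vs-discriminator loop on 1-D "data", in plain numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

    # Real data: samples from N(4, 1). Generator: G(z) = a*z + b, z ~ N(0, 1).
    a, b = 1.0, 0.0     # generator parameters
    w, c = 0.1, 0.0     # discriminator D(x) = sigmoid(w*x + c)
    lr = 0.05

    for step in range(2000):
        x_real = rng.normal(4.0, 1.0, size=64)
        z = rng.normal(size=64)
        x_fake = a * z + b

        # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
        d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
        w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
        c += lr * np.mean((1 - d_real) - d_fake)

        # Generator update: nudge a, b so fake samples fool the discriminator.
        g = (1 - sigmoid(w * x_fake + c)) * w   # d/dx of log D(x) at each fake x
        a += lr * np.mean(g * z)
        b += lr * np.mean(g)

    # b should drift toward the real mean (4.0); toy GANs do oscillate a bit.
    print("generator offset b ~", round(b, 2))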

Edit: Sorry everyone! After speaking to a colleague, I think I found the actual research paper that was published for this, as well as the Facebook research post where they discuss their work. They do not use Generative Adversarial Networks (though those are super cool). The purpose of the work was to build a machine that can negotiate business transactions via dialogue. They collected about 6,000 English negotiation dialogues via Amazon Mechanical Turk, in which two people haggle over purchases (which isn't a terribly large dataset). They then had two chatbots produce dialogue to complete a deal, but there was no constraint forcing them to stick to English. The machines were able to create "fake transactions", but they weren't in English, so the experiment was a failure. Facebook must have had some chatbots lying around that do speak English well (but don't perform business transactions), so those were used to ensure the output was valid English.

MaryTheMerchant308 karma

Do you think there are any specific laws governments should be putting in place now, ahead of AI advancements?

SITNHarvard769 karma

The three laws of robotics suggested by Isaac Asimov:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seriously speaking, there should probably be some laws regulating the application of AI, and maybe some organization that evaluates code when AI will be used in moral and ethical situations. The problem that comes to mind is a driverless vehicle deciding between continuing its course to save 1 person or deliberately swerving to save 10 people. I'm not an expert though.

Rockwell

zach466920241 karma

Hi,

I'm a second year interested in AI and machine learning. I was hoping that in the future opportunities related to AI safety would open up. Do you have any tips on things I should do, or courses I should take in this general direction? Thanks!

SITNHarvard381 karma

Adam here:

The folks at Google wrote a pretty interesting article about the safety concerns with AI in the near future. They had five main points:

1) “Avoiding negative side effects” - If we release an AI out into the wild (like a cleaning or delivery robot), how will we be sure it won’t start attacking people? How do we prevent that in the first place?

2) “Avoiding reward hacking” - How do we ensure a machine won’t “cheat the system”? There is a nice thought experiment/story about a robot whose only goal is to make as many paperclips as possible!

3) “Scalable oversight” - How can we ensure a robot will perform the task we want it to do without “wasting its time”? How do we tell it to prioritize?

4) “Safe exploration” - How do we teach a robot to explore its world? If it is wandering around, how does it not fall into a puddle and short out? (Like this poor fellow)

5) “Robustness to distributional shift” - How do we get a machine to work all the time in every condition that it could ever encounter? If a floor cleaning robot has only ever seen hardwood floors, what will it do when it sees carpet?

For courses, this is a very uncharted area! I don’t think our understanding of machine learning is far enough along that we have really encountered these problems yet, but they are coming up! I would advise becoming familiar with algorithms and how these machines work.

Edit: Forgot a number, and it was bugging me.

haveamission141 karma

As someone with a coding background but no ML background, what libraries or algorithms would you recommend looking into to become more educated in ML? Tensorflow? Looking into neural networks?

SITNHarvard143 karma

Kevin here: On the cognitive science side, I'm seeing lots of people get into tensorflow as a powerful deep learning tool. For more general or instance-by-instance application of machine learning, scikit-learn gets a ton of use in scientific research. It's also been adapted/built on in my specific field, neuroimaging, in the tool nilearn.
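
For a taste of scikit-learn's "out of the box" feel, here is a minimal sketch: standard library calls on one of its bundled toy datasets, nothing field-specific.

    # Minimal scikit-learn workflow: load data, split, fit, evaluate.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)        # 8x8 images of handwritten digits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)                      # the entire training step
    print("held-out accuracy:", model.score(X_te, y_te))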

Windadct112 karma

I am an EE with a number of years in industrial robotics and have monitored robotics research over the years - which to me is really more about machine learning and "AI".

I have yet to see any example of what I would call "intelligence" - or original, organic problem solving. Or in a simple term: creativity. Everything appears to me to be an algorithmic process with larger data sets and faster access.

Can you provide an example of what you would call true intelligence in a machine?

SITNHarvard167 karma

William here: I’ve found “AI” to be a bit of a moving target; we have a knack for constantly redefining what “true intelligence” is. We originally declared that AI should be able to defeat a human grandmaster at chess; it later did. The goalposts moved to a less computer-friendly game, Go: AlphaGo prevailed in a thrilling match last year. So what is intelligence? Is it the ability to beat a human at a game? Make accurate predictions? Or even just design a better AI? Even the definition you suggested is a bit fuzzy: could we describe AlphaGo as “creative” when it comes up with moves that human masters couldn’t imagine? There is even an AI that composes jazz. If we can make something that resembles creativity through an algorithmic process with large datasets, what does that mean? These are all interesting philosophical questions that stem from the fact that much of AI development has been focused on optimizing single tasks (composing jazz, predicting weather, playing chess), which is most easily done using algorithms and datasets. This is all to say that we need a good definition of what “true intelligence” is before we can look for it in the systems that we create.

WickedSushi98 karma

What are you guy's thoughts on the Chinese Room Thought Experiment?

SITNHarvard167 karma

Kevin here: to me, the idea that a computer can create human-like outputs based on normal human inputs but not "understand" the inputs and outputs intuitively makes sense. But I'm probably biased since I took Philosophy of Language with John Searle as an undergrad...

But okay, generally we think of "understanding" as having some basis in previous life experiences, or in hypothetical extensions of our experiences (which underlies so much of what we think of as making humans unique). Computers don't have those life experiences--although they do have "training" in some way or another.

I think the bigger question is "Does it matter?" And this is because the Chinese Room, as a computer, is doing exactly what it's supposed to do. It doesn't need to have human life experiences in order to produce appropriate outputs. And I think that's fine.

SITNHarvard136 karma

Dana here: Great points, Kevin.

Great question! For those who may not know, the Chinese Room argument is a famous thought experiment by philosopher John Searle. It holds that computer programs cannot "understand," regardless of how human-like they might behave.

The idea is that a person sits alone in a room, and is passed inputs written in Chinese symbols. Although they don't understand Chinese, the person follows a program for manipulating the symbols, and can produce grammatically-correct outputs in Chinese. The argument is that AI programs only use syntactic rules to manipulate symbols, but do not have any understanding of the semantics (or meaning) of those symbols.

Searle also argues that this refutes the idea of "Strong AI," which states that a computer that is able to take inputs and produce human-like outputs can be said to have a mind exactly the same as a human's.

Burnz515083 karma

Who's paying your tuition, car insurance, everyday food money, etc. Who's funding your life?

SITNHarvard173 karma

Many of us have funding through the National Science Foundation, The Department of Energy, The Department of Defense, The National Institutes of Health....

tl;dr - SCIENCE FUNDING IS IMPORTANT!!!!!

SITNHarvard134 karma

Harvard

bmanny56 karma

Has anyone used machine learning to create viruses? What's stopping someone from making an AI virus that runs rampant through the internet? Could we stop it if it became smart enough?

Or is that all just scary science fiction?

SITNHarvard110 karma

People use machine learning to create viruses all the time. There has always been a computational arms race between viruses and antivirus software. People who work in computer security don't mess around, though: they get paid big bucks to do their job, and the field has some of the smartest people around.

Crazy people will always do crazy things. I wouldn't lose sleep over this. Security is always being beefed up and if it's breached we'll deal with it then.

Rockwell

bearintokyo55 karma

Will it be possible for machines to feel? If so, how will we know and measure such a phenomenon?

SITNHarvard109 karma

By feel I'm assuming you're referring to emotion. It'd be controversial to say that we could even measure human emotion. If you're interested in that stuff, Cynthia Breazeal at MIT does fantastic work in this area. She created Kismet, the robot that could sense and mimic human emotions (it may be more accurate to say facial expressions).

http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html

-Rockwell

bearintokyo31 karma

Wow. Interesting. I wonder if AI would invent their own equivalent of emotion that didn't appear to mimic any human traits.

SITNHarvard103 karma

Kevin here: I think an issue is what the purpose would be. Given our brain's "architecture," emotion (arguably) serves the function of providing feedback for evolutionarily beneficial behaviors. Scared? Run away. Feel sad? Try not to do the thing that made you feel sad again. Feel content? Maybe what you just did is good for you. (although recent human success & decadence might be pushing us into "too much of a good thing" territory...)

What function would emotion serve in an AI bot? Does it need to feel the emotion itself? Or is it sufficient for it to recognize emotion in its human interlocutors and to respond appropriately in a way that maximizes its likelihood of a successful interaction?

kilroy12346 karma

Do you think advancements in AI / machine learning will follow Moore's law and exponentially improve?

If not, in your opinion, what needs to happen for there to be exponential improvements?

SITNHarvard115 karma

AI actually hasn't improved that much since the 80s. There is just a lot more data available for machines to learn from. Computers are also much faster so they can learn at reasonable rates (Moore's law caught up). I think understanding the brain will help us improve AI a lot.

-Rockwell

Swarrel40 karma

Do you think A.I. will become sentient, and if so, how long will it take? -Wayne from New Jersey

SITNHarvard173 karma

Can you convince me right now that YOU are sentient?

Rockwell

(To answer your question, my personal metric for robot sentience is self-deprecating humor as well as observational comedy by the same robot in one comedy special.)

ohmeohmy3739 karma

Will we ever be able to merge our own intelligence with machines? Can they help us out in how we think, or will they be our enemies, like everyone says?

SITNHarvard55 karma

William here: This is currently happening! A couple of examples: chess players of all levels make extensive use of chess machines/computers to aid their own training and preparation; AI platforms like Watson have been deployed all over the healthcare sector; and predictive models in sports have been taking off recently. Generally speaking, we make extensive use of AI techniques for prediction and simulation in all sorts of fields.

nicholitis37 karma

When any of you meet someone new and explain what you do/study, do they always ask singularity-related questions?

What material would you point a computer science student towards if they were interested in learning more about AI?

SITNHarvard29 karma

Thanks for the question! We put some resources at the top of the page for more info on getting into machine learning. It is a pretty diverse field and it is changing very rapidly, so it can be hard to stay on top of it all!

MCDickMilk34 karma

When will there be AI to replace our congressmen and other (you know who!) politicians? And can we do anything to speed up the process?

SITNHarvard52 karma

Politics, ethics, and the humanities and liberal arts in general will be the hardest thing for AI to replace.

Rockwell

APGamerZ23 karma

Two questions:

1) This is probably mostly for Dana. My understanding of fMRI is limited, but from what I understand, the relationship between blood-oxygen levels and synaptic activity is not direct. In what way does our current ability in brain scanning limit our true understanding of the relationship between neuronal activity and perception? Even with infinite spatial and temporal resolution, how far would we be from completely decoding a state of brain activity into a particular collection of perceptions/memories/knowledge/etc.?

2) Have any of you read Superintelligence by Nick Bostrom? If so, I'd love to hear your general thoughts. What do you make of his warnings of a very sudden general AI take-off? Also, do you see the concept of whole brain emulation as an eventual inevitability, as is implied in his book, given the increases in processing power and our understanding of the human brain?

Edit: grammar

SITNHarvard29 karma

Dana here: So, fMRI infers neural activity by taking advantage of the fact that oxygenated blood and deoxygenated blood have different magnetic properties. The general premise is that you use a network of specific brain regions to perform a task, and active brain regions take up oxygen from the blood. Then to get more oxygen, our bodies send more blood to those parts of the brain to overcompensate. It's this massive overcompensation that we can measure in fMRI, and use to determine which brain regions are actively working to complete the task. So this measure is indeed indirect - we're measuring blood flow yoked to neural activity, and not neural activity itself.

But although the BOLD signal is indirect, we are still able to learn a lot about the information present in BOLD activity. We can use machine learning classification techniques to look at the pattern of responses across multiple voxels (3D pixels in the fMRI image) and decode information about the presented stimuli. Recently, neuroscientists have also started using encoding models to predict neural activity given the characteristics of a stimulus, and thus describe the information about a stimulus that is represented in the activity of specific voxels.

However, this is all operating at the level of a voxel - and a single voxel contains tens of thousands of neurons!
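
As a toy illustration of the decoding idea (simulated numbers standing in for voxels, nothing like real fMRI preprocessing), a linear classifier can be trained to predict the stimulus condition from multi-voxel patterns:

    # Decode a stimulus category from (simulated) multi-voxel response patterns.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 100, 50

    # Two conditions (say, speech vs. music), each evoking a slightly different
    # mean pattern across voxels, buried in trial-to-trial noise.
    labels = np.repeat([0, 1], n_trials // 2)
    condition_patterns = rng.normal(0, 0.5, size=(2, n_voxels))
    X = condition_patterns[labels] + rng.normal(0, 1.0, size=(n_trials, n_voxels))

    # Above-chance (0.5) cross-validated accuracy means the voxel pattern
    # carries information about the stimulus -- the core logic of decoding.
    scores = cross_val_score(LinearSVC(), X, labels, cv=5)
    print("decoding accuracy:", scores.mean())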

APGamerZ7 karma

Interesting, thanks for the response! A few followup questions. If the encoding models operate at the voxel level, how does that limit the mapping between stimuli and neural activity? If each voxel is tens of thousands of neurons, is there fidelity that is being lost in the encoding models? And does perfect fidelity, say 1 voxel representing 1 neuron, give a substantial gain in prediction models? Do you know what mysteries that might uncover for neuroscientists or capabilities it might give to biotech? (I assume 1 voxel to 1 neuron is the ideal or is there better?)

Is there a timeline for when we might reach an ideal fMRI fidelity?

SITNHarvard15 karma

We're definitely losing fidelity in our models due to large voxel sizes. We're basically smearing the "activity" of tens of thousands of neurons (so far as that's what we're actually recording with fMRI, which as we've discussed isn't totally true) into each voxel. So our models will only be accurate if the patterns of activity that we're interested in actually operate on scales larger than the voxel size (1-3 mm³). Based on successful prediction of diagnoses from fMRI activity (which I wrote about previously for Science in the News), this is almost certainly true for some behaviors/disorders. But getting to single-neuron level recordings will be super helpful for predicting/classifying more complex behaviors and disorders.

For instance, this might be surprising, but neuroscientists still aren't really sure what the motor cortex activity actually represents and what the signals it sends off are (for instance, "Motor Cortex Is Required for Learning but Not for Executing a Motor Skill"). If we could record from every motor cortical neuron every millisecond during a complex motor activity with lots of sensory feedback and higher-level cognitive/emotional implications, a predictive model would discover so much about what's being represented and signaled and when.

For fMRI, we're down below 1mm resolution in high magnetic field (7T+) scanners. There's definitely reason to go even smaller - it'll be super interesting and important for the rest of the field to see how the BOLD (fMRI) response will vary across hundreds or tens or single neurons. Maybe in the next 10ish years we'll be able to get to ~0.5mm or lower, especially if we can develop some even-higher field scanners. But a problem will be in dealing with all the noise--thermal noise from the scanner, physiological noise from normal breathing and blood pulsing, participant motion.... Those are going to get even hairier at small resolutions.

SITNHarvard9 karma

As far as fMRI goes, I think Kevin's answer (below) gets to the point. We are measuring a signal that is blurred in time and space, so at some point increased resolution doesn't help us at all - and even lowers our signal-to-noise ratio!

SITNHarvard6 karma

Kevin here: Dana's response is really good. fMRI is inherently limited in what it'll be able to tell us, since it is an indirect measurement of brain activity. Additionally, improving spatial and temporal resolution is helpful in fMRI, but at a certain point we're limited by the dynamics of what we're actually recording - since the BOLD response is slow and smeared over time, getting much below ~1 second in resolution won't give us much additional information (although there definitely is some information at "higher" frequencies).

So it's really important to validate the method & findings with other methods, like optical imaging (to measure blood oxygen directly), electro-/magnetoencephalography (to measure population-level neural activity), sub-scalp EEG (for less noise--but this is restricted to particular surgical patients), and even more invasive or restrictive methods that can only be used in animal models. For instance, calcium imaging can now record from an entire (larval) fish brain, seeing when individual neurons throughout the brain fire at great temporal resolution.

nginparis16 karma

Have you ever had sex with a robot? Would you want to?

SITNHarvard38 karma

3Dpaper10 karma

I studied industrial design and I'm very interested in AI and machine learning. What would be your suggestions on how to begin to learn to utilize and get involved in AI and machine learning without having a background in programming/computer science/software engineering?

Learning a programming language is a start (I'm starting to learn some Python), but I don't really know a path beyond that.

SITNHarvard8 karma

Thanks for the question! We put some links at the top of the page for more information! Keep on going!

ezzyrd6 karma

What do you like most about what you do?

SITNHarvard18 karma

Adam here:

I really like working on problems that are going to help others, and I think that science and research is the best way to have a positive impact on others' lives.

In my field in particular, we are working with data that is open and available for anyone to use (genetic sequence data). This data has been available for years, but we as researchers have to be creative in how we use it. A la Rick and Morty: "...sometimes science is more art than science..."

With the advances in machine learning, you can dream up a new model or idea, and implement it later that day. The speed at which you can turn your ideas into code is amazing and so much fun to do.

G06155 karma

How long do you think it will take to make an AI like Jarvis or Friday from the Avengers/Spider-Man movies?

SITNHarvard16 karma

Adam here:

I think we are getting rather close to personal assistants we can chat with that will do our [menial] bidding. Amazon is currently holding a competition for creating a bot you can converse with. And when there is money behind something, it usually happens.

Moreover, there are already a few digital personal assistants out there you can purchase (Amazon Echo, Google Home, Siri). (They can all talk to each other too!) Soon enough these will be integrated with calendars, shopping results (where they can go fantastically wrong), and even more complicated decision-making processes.

byperheam4 karma

What's the best route academically to get involved with AI in the future?

I'm in community college still and I'm going for an AA in computer science, but I'm also interested in getting an AA in psychology due to the concept of working within the neuroscience/AI field in the future.

SITNHarvard12 karma

Adam here:

Honestly, I think having a strong mathematical background is really important for being "good" at machine learning. A friend once told me that machine learning is called "cowboy statistics": machine learning is essentially statistics, but with fancy algorithms. (I think it is also called this because the field is so new and rapidly evolving, like the Wild West.) Too often, I think, machine learning gets hyped up when basic statistics can many times get you pretty far.

I would also advocate pursuing the field you are passionate about--neuroscience and psychology sound great! It doesn't do much good to model data if you don't know what it means. Most of us here have a specific problem that we find interesting and apply machine learning methods to it. (Others work purely on machine learning itself; that is always an option.)

tl;dr: Math and your field of interest.

Avander3 karma

So CNNs were popular, then residual CNNs, now generative adversarial networks are the cool thing to do. What do you think is coming up next?

SITNHarvard6 karma

Interesting! Personally, I think that convolutional neural networks are here to stay, and they are only going to get more important in the future. In particular, I think dilated CNNs are going to edge out RNN-based models for sequence analysis. They are faster, use less memory, and can be optimized for GPU architectures. They have done some cool stuff in machine translation and generating audio.
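
A rough sketch of why dilation helps with sequences (plain numpy, WaveNet-style causal filters as an assumed setup): stacking kernel-size-2 layers with dilations 1, 2, 4, 8 grows the receptive field exponentially with depth, whereas an RNN must walk the sequence one step at a time.

    # Stacked causal dilated 1-D convolutions: receptive field doubles per layer.
    import numpy as np

    def causal_dilated_conv(x, kernel, dilation):
        """y[t] = sum_j kernel[j] * x[t - j*dilation], zero-padded on the left."""
        pad = (len(kernel) - 1) * dilation
        xp = np.concatenate([np.zeros(pad), x])
        return np.array([sum(kernel[j] * xp[pad + t - j * dilation]
                             for j in range(len(kernel)))
                         for t in range(len(x))])

    h = np.zeros(32)
    h[0] = 1.0                      # unit impulse: trace how far influence spreads
    for layer, d in enumerate([1, 2, 4, 8], start=1):
        h = causal_dilated_conv(h, np.ones(2), dilation=d)
        print(f"after layer {layer} (dilation {d}): receptive field =",
              np.count_nonzero(h))  # 2, 4, 8, 16 timesteps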

reid84703 karma

In terms of AI's application w/ nanorobotics in medicine, do you know anything about nanobots being used as a sort of tool in AI diagnosis of health conditions? I'm wondering about the different applications of AI here--would that method be more useful for diagnosing brain health than whatever we have now?

SITNHarvard10 karma

Kevin here: I'm not super up on nanobots or neural dust, but they'll absolutely be useful in terms of diagnosing brain health. Our methods right now are pretty crude and indirect for most disorders anyway, so nanobots won't even need to be that good in order to be helpful.

What I mean by that is that, for instance, brain imaging like MRI can show us some things, but only if the scan is sensitive to whatever it's measuring. Large contrasts in tissue density? Yep, MRI's pretty good at that, so we can find (big) tumors OK. Brain activity? Eh, fMRI can basically see which (large) brain areas are using up oxygen, but it's not specific enough (or high enough resolution) to tell us much diagnostically. Specific chemical or neurotransmitter concentrations in small brain areas, or actual brain activity? We try, but we're still pretty far off. So nanobots will be super useful in telling us extremely sensitive, highly localized information about the brain.

SITNHarvard7 karma

When I think of AI in diagnostic medicine, I actually don't think of nanobots (I don't know much about nanobots myself). I think of a machine that has learned a lot about different people (e.g., their genomes, epigenomes, age, weight, height, etc.) and their health, and uses that information to diagnose new patients. This is the basic idea behind personalized medicine, and it's making great progress. You can imagine a world where we draw blood, and based on sequencing and other diagnostics the machine will say "you have this disease and this is the solution for you". It happens a bit already.

Rockwell

Blotsy3 karma

Hello! Deeply fascinated with AI, thanks for doing an AMA.

What is your take on the recent development of deep learning structures developing their own languages without human input?

SITNHarvard3 karma

Adam here:

Thanks for asking! I think I answered the question here. Hopefully that clears it up a bit!

mercs2crazy3 karma

Does the AI have the capability to choose to do, or not do, something based on its own observation, or only when it's coded into the AI to make those choices? In other words, does the AI have the freedom to choose, or are its choices already determined by algorithms?

SITNHarvard14 karma

Here is the basic gist of how most AI "learns".

First you choose a task that you want your AI to perform. Let's say you want to create an AI that judges court cases and gives a reason for its decisions.

Second, you train your AI by giving it examples of past court cases and the resulting judgements. During this process, the AI will use the examples to develop a logic that's consistent across all of them.

Third, the AI applies this logic to novel court cases. The scariest part about AI is that in most cases we don't really understand the logic that the computer develops; it just tends to work. The success of the AI depends heavily on how it was trained. Many times it will give a decision that is obvious and we can all agree on, but other times it may give answers that leave us scratching our heads.

There are other types of AI in which you simply program the logic and/or knowledge of a human expert (in this case a judge or many judges) into a machine and allow the machine to simply execute that logic. This type of AI isn't as popular as it used to be.

I hope this sort of answers your question.

Rockwell
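
The three steps above map directly onto code. Here is a toy sketch with an invented mini-dataset and standard scikit-learn calls (short text snippets standing in for real court records):

    # Steps 1-2: pick a task and train on past cases. Step 3: apply to a new case.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented "past court cases" and judgements (0 = dismissed, 1 = upheld).
    cases = [
        "contract breached, damages documented",
        "contract fulfilled, no damages shown",
        "clear evidence of negligence and harm",
        "no evidence presented by plaintiff",
    ]
    judgements = [1, 0, 1, 0]

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(cases, judgements)   # the machine builds its own internal "logic"

    # A novel case: we can inspect the prediction, but the learned weights are
    # exactly the hard-to-interpret "logic" described above.
    print(model.predict(["documented evidence of harm and damages"]))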

fantasystories2 karma

Thank you for this opportunity.

So, my first question is: will AI learn how to write books? By books I mean fiction like Game of Thrones, Pride and Prejudice, or Harry Potter. If yes, when do you expect it to happen? Now that AI can learn from examples, can it learn to write? And will it surpass people at it? Are writers' jobs in danger?

Another question I have is: why do you think we are not in danger of AI taking control like in science fiction? Do you assume we are far from achieving such a level of AI sentience? Do you disagree with the paperclip thought experiment, or is there some other reason you find it unlikely?

If we exclude religious and similar arguments, how likely is it that AI could achieve levels of sentience and intelligence sufficient to take control and defeat humanity?

SITNHarvard3 karma

Kevin here - so the other questions have been addressed at least in part elsewhere in the comments, so I'll focus on the first one.

AI will absolutely be able to write books. In fact, it's already writing poetry that is indistinguishable from human-authored poetry.

Complete novels will be tougher since they have a lot more structure, coherence, and recurring elements. But with the building blocks in place of being able to artificially create sensible-sounding prose, it won't be long before full novels can be AI-written.

But an important question for all art--music and visual art are other frontiers for AI--is how we choose to value them. Beyond the aesthetics of art (which AI can replicate), we highly value the meaning of art, which comes from morality and ethical purpose, situational experience, and other human aspects. I'm not sure I'd love "Dark Side of the Moon" so much if it wasn't motivated by the gut-wrenching loss of a friend and collaborator to his own inner demons, for example.

CaptainInertia2 karma

I was listening to a Sam Harris podcast, and the guest said that if AI became truly AI/sentient they would essentially be a "person". He also said that fearing and trying to limit or shackle these "persons" would be racist.

What do you think about his opinion?

Edit: the guy is David Deutsch

SITNHarvard3 karma

Adam here:

To be entirely honest, I don't know how I feel about this issue, and so I don't have much of an opinion. For now, I don't think we have to worry about it with our current algorithms.

However, the closest thing I can relate to is the White Christmas episode of Black Mirror. Without any spoilers, they make an artificial copy of someone's "self", host it on a computer, and essentially torture it until it complies. (You should all watch this episode!)

I end up feeling bad for the AI in the show, but I know it is not real, only a simulation. So only time will tell.

whatsthatbutt2 karma

Which job sectors will be partially replaced? And which job sectors will be entirely replaced?

SITNHarvard5 karma

Adam here:

Great question! I have run across this website that gives a quick estimate of how likely it is that a given job will be automated.

joemaniaci2 karma

One fear is of AI escaping, but if we ever achieved a true AI it would probably be on extremely powerful and dedicated hardware. Wouldn't it cease to function in the wild?

SITNHarvard7 karma

Adam here:

Great question! Computers are getting increasingly powerful, smaller, and almost everywhere. The smartphone in your pocket is more powerful than the machines that got us to the moon in the first place.

Google is already using the millions of Android smartphones out there to power their algorithms. Google Search is an extremely complicated algorithm that helps answer the questions you ask (almost like magic, it seems). Instead of having one supercomputer train a machine learning algorithm for their search engine, they have recently taken a different approach.

1) Put the algorithm on users' phones via the internet, where users use the search function (i.e., provide training data).

2) Given a single user's searches, a tiny update to the algorithm is computed on that user's phone. This happens for every Android user: that is oodles and oodles of parameter updates. (Oodles being a very scientific term.)

3) Those results are then sent back to Google and averaged, and the entire model is updated.

4) The improved search engine algorithm is then sent back to the user, and the process repeats.

In this way, Google can utilize every single customer's phone to do some tiny amount of computation that helps train some larger model--Google Search.

This is just one example in the wild of distributed machine learning that is not done on a supercomputer, and I imagine this will be more important in the future.
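
For the curious, the loop described above now usually goes by the name "federated learning". A bare-bones numpy sketch of the averaging idea, with a one-parameter linear model standing in for the real search algorithm:

    # Federated averaging, bare bones: each "phone" improves the shared model on
    # its own private data; only updated parameters travel back to the server.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = 3.0                                        # relationship to learn
    phones = [rng.normal(size=20) for _ in range(100)]  # each phone's private inputs

    w = 0.0                                             # shared global model
    for round_ in range(50):
        local_ws = []
        for x in phones:
            y = true_w * x + rng.normal(0, 0.1, size=x.shape)  # stays on the phone
            w_local = w
            for _ in range(5):                          # a few local gradient steps
                grad = 2 * np.mean((w_local * x - y) * x)
                w_local -= 0.05 * grad
            local_ws.append(w_local)                    # only the parameter leaves
        w = np.mean(local_ws)                           # server averages updates

    print("learned w:", round(w, 3), "(true w = 3.0)")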

Swarrel1 karma

Do you think A.I. or robots will ever develop emotion? I think that's what most people are trying to get at when they ask if they can ever be creative.

After all, that's what separates humans, am I right - being able to have emotion along with high intelligence?

SITNHarvard3 karma

William here: It depends on what we mean by emotions. If we are looking for robots to hope, to feel, to experience joy or sorrow, the answer will depend on your own stance on how attainable artificial sentience or consciousness is, which is still an open question. At the moment, AI is still very dependent on programmers and developers to provide direction, but there are examples of AI developed for gaming that show behavior that could be interpreted as wrath (taking revenge on a player even when it is irrational) or altruism (helping a player at personal expense) for narrative reasons. It is also often useful for public-facing AI, such as customer service platforms, to have the ability to express empathy, etc. Most people would argue, however, that outwardly displaying emotion and having emotion are distinct.

MightyMrFish1 karma

Given what is currently known about AI and the potential you all see for AI in the near future, are there any media depictions of AI that you see as being more accurate than others? How would you write an AI using what is known?

SITNHarvard11 karma

Dana here: The internet certainly has a ton of opinions about media representations of AI.

Personally, I don't find a lot of the dystopian sci-fi representations particularly compelling. My two favorites:

In the movie Her, the main character falls in love with his phone's operating system (think a far more advanced version of Siri). This is a long way off from our current voice recognition and speech synthesis algorithms, but it seems plausible in the next few decades.

My second favorite example is the movie Robot and Frank. It's set in the near future when people start using robots in their everyday lives. Frank is an elderly man, and his adult son buys him a robot to help out around the house. The film explores the social implications of our interactions with robots.

My husband is a mechanical engineer / roboticist, and he thinks a realistic example is Jarvis from Iron Man, because it was built for the general purpose of helping someone, but has the ability to be customized for a specific purpose like driving the suit.

HalyAThk1 karma

How many females are in your class?

SITNHarvard1 karma

My class has six females and six males.

Rockwell

dankestmemelord691 karma

I am a rising freshman at a small liberal arts school. Is AI being explored in non-human animals?

SITNHarvard6 karma

I'm not sure I understand the question.

APGamerZ2 karma

Not completely sure, but I believe he might have meant developing AI models of animals (like robotic dogs). Perhaps he is thinking about the fact that completely modeling a less intelligent animal's behavior would seem to be a precursor to modeling general human intelligence? How far are we from creating a realistic emulation of an ant, or something more biologically simplistic, like a single-cell organism?

[deleted]1 karma

[deleted]

SITNHarvard10 karma

[This answer has been redacted by DARPA-BOT.]

TheInternetLegend1 karma

How does it feel being inferior to AI in every single way?

SITNHarvard1 karma

At least we CAN feel.

notimetologout-3 karma

Why are you so smug to think you have better answers than me?

SITNHarvard4 karma

Because I used to live in San Francisco - Kevin