Hi everyone,

I'm a PhD researcher in neural interfaces. Feel free to ask me about neurotechnology in general or about the state of research for invasive neural interfacing sensors.

I'm really excited about this technology as it has great potential and will gladly expand on where I see the field of brain-computer interfaces going in the future.

I'm also aware there are a lot of ethical concerns/fears regarding this topic, which I'm happy to discuss as well.

Comments: 1169 • Responses: 47

HighQueenOfFillory404 karma

How did your career escalate from your degree? I'm doing a Neuroscience undergraduate, but I have no idea how to climb the ladder to a really good job once I leave uni. I'm supposed to be going on a research year abroad in September but because of COVID I might not get to go and then leave uni with no experience.

nanathanan506 karma

I did an undergraduate degree and first master's degree in electronic engineering and nanotechnology. (MEng)

I then did a research master's degree (MRes) in 2-dimensional materials (graphene, 2D transition metal dichalcogenides, etc). While doing my master's project in graphene biosensors for the brain I got interested in neural interfaces.

For my PhD work, I then focused on designing a novel neural interface that could tackle the key problems with the existing technology. These were: biocompatibility (affecting the operable lifetime of the sensors in the brain), data quality (single neuron addressability, signal type, and computational value for external applications), and data quantity (the number of sensors in a given area or for a given implant).

Now, as I finish my PhD, I hope to further develop and eventually sell my sensors, so entrepreneurship is the way forward for me.

Fantastic that you are interested in neuroscience - there are plenty of exciting fields that you can further specialize in. Have you considered doing a master's degree? With a bachelor's I'm not sure what the career prospects are, to be honest. I never intended to enter the workforce at this stage of my education, so I never properly researched the opportunities. I'm sure there's plenty out there, but I'm not the best person to answer this particular question.

5551212nosoupforyou85 karma

This might seem like a silly question, but if you didn't expect to enter the workforce, what did you expect to do? As a person that somehow parlayed a 2 year associates degree into an engineering position, I am fascinated by the career paths that were available to people who continued education after a bachelors degree. And a follow up follow up, how have you been supporting yourself through, what, 10 years of post-secondary education?

nanathanan194 karma

I always intended to become an entrepreneur. I didn't really want to work on other people's projects, as I always had many ideas of my own. I've always felt that a good business is the best way to bring new technology to the world.

I have been in university education for almost 10 years, yes. Luckily I come from a country that paid for some of it, and I have a government-backed zero-interest loan for the rest of my student fees. Since I studied mostly in Europe as a European, my tuition fees never exceeded $5k per year. For my PhD, I'm part of a fully funded program and I get paid a salary.

mutandis5 karma

Graphene flagship (Barcelona) research by any chance? Just wondering as I used to work on something similar.

nanathanan3 karma

Close guess actually!

YT__4 karma

You said the sensors you develop are invasive. Have they been tested? What testing is even required? Do you have papers you've published that you can link/provide?

nanathanan2 karma

I haven't published my work yet, and I'm currently just testing the electrical properties of my sensors (very, very early stage). The type of sensor I've made is designed to couple directly with neurons, so it would need to be invasive if ever tested in a living brain.

TheNewRobberBaron3 karma

Hey there! Incredibly interesting AMA. What exactly are your sensors capable of, and to whom do you plan on selling them?

nanathanan11 karma

I sadly can't go public with that information yet. I plan on selling my sensors to other businesses (B2B) as I only develop a couple of parts of the full technology stack - alone they can't be used.

Hyakuman2 karma

Do you work directly with neuroscientists or is it all tech people? I studied neuroscience as an undergraduate years ago. I wanted to do a PhD like yours, but my lack of engineering or tech skills always kept me out.

nanathanan6 karma

Neuroscientists will always be needed in the field. We make the tech to enable the recording of signals, but a neuroscientist is needed to make sense of it. If you're still interested, go for it!

thelolzmaster132 karma

I recently read the Neuralink white paper and it seems they’re at 10x the previous SOTA in sheer number of probes as well as having built a robot to perform the implant operation, custom electronics, materials, and software. With the amount of funding they presumably have do you think anyone in academia is able to compete on the problem? Are you aware of any other big players in the BCI space? I get the sense that there is very little real work being done in the area despite its significant applications. Is this because it is early in its development?

nanathanan178 karma

Neuralink's claims are somewhat true, but only if represented correctly. That sheer number of sensors has been achieved many times before, but without comparable biocompatibility and without the analog-to-digital conversion (ADC) chip. Their true industrial innovation is the ADC and the robotic arm for implanting the sensors; everything else is fairly established technology already demonstrated by countless research groups around the world. What Neuralink has excelled at is taking some of the best technology from publicly available research and putting it all into a viable device. (I'm a massive fan.)

Academia is sadly not focused on making a commercially available neural interface. Research into neural interfacing in academia is, in my opinion, very slow because there's no common goal or objective. It's actually disappointing to see how little makes it through from the world of research to clinical applications. The clinically approved multi-electrode array devices supplied for clinical purposes by companies such as Blackrock Microsystems are almost 30 years old and extremely rudimentary compared to what we can do today. The industry is ripe for innovation and technology translation.

There are a huge number of companies in this space - current investment in neurotech around the world is on the order of $2 billion and currently growing by 10-20% a year. There's a great deal of work being done, and it's about to blow up in the coming decade. The most exciting companies, in my opinion, are BIOS, Neuralink, and Kernel.

For a full view of the neurotech industry, this report is quite good (Although, sadly, the author doesn't quite capture the value of invasive sensors):

https://www.idtechex.com/en/research-report/invasive-and-non-invasive-neural-interfaces-forecasts-and-applications-2018-2028/573?fbclid=IwAR28NXrToSYQtoc1tOc5vO2BCm-igud1h9TM_6l6CsrZdIKHKBN-qzybFyw

thelolzmaster24 karma

Thank you for the fantastic reply. I have some follow up questions. What are the main bottlenecks in BCI technology today? If it's not the number of probes is it simply the biocompatibility? Is it the software? Is it the signal processing? What are the landmarks on the way to BCI in clinical use in your opinion?

nanathanan44 karma

The two main bottlenecks for the technology are bandwidth and biocompatibility. This boils down to data quality, sensor density, and sampling frequency. These problems are being solved at a rapid pace though.

The real issues that I see for this technology are funding and regulation. Investors are scared to invest in it because it's deep tech, poorly understood, and considered early-stage and high-risk. From a regulatory standpoint, it's difficult to get new devices clinically approved due to the materials they use.

illmaticrabbit9 karma

Edit: oops posted before seeing OP’s reply

Adding on to this, I’m curious whether OP is willing to talk about the advantages and disadvantages of their device relative to Neuralink’s technology.

I’m also curious about how the technology being developed in academic labs measures up to Neuralink’s technology. In 2018 I went to a conference focused on new technology in neuroscience and I remember a handful of groups there working on fiber electrodes / miniaturized electronics, but I’m not sure how they measure up to Neuralink’s inventions.

Also, not to derail the conversation, but I feel like Elon Musk makes an ass out of himself by making the author list for that paper “Elon Musk, Neuralink”.

nanathanan10 karma

Neuralink's innovation is not in the sensors. The type of sensors they use has been made by research teams around the world for half a decade already.

Researchers tend to look into things that are far from ever seeing commercial application - optogenetics and optical neural probes, for example. I feel there are way too many regulatory hurdles to bring those to market in this century, but sadly many researchers want to use them.

krasovecc104 karma

Do you feel like the technology where "your brain is downloaded and turned into AI" will ever actually exist, making "humans" immortal? Not sure if this is similar to the field you work in... sorry if it isn't.

nanathanan233 karma

Simple answer: No / Maybe, but it depends on your definitions and what you're imagining.

To clarify, let's unpack the question a little bit:

'Downloading your brain'

This isn't the main goal for the technology, but there are of course people interested in pursuing it. The interconnectivity of the brain is computationally demanding to model, and as far as I know, mapping the entire human brain is beyond our current computing power. Beyond that, the real benefit of BCIs is to allow humans to utilize computing power that's not native to the brain (e.g. increased memory and numerical proficiency). It doesn't make sense to take what works well in the brain and replicate it in a computer, where it doesn't work as well. However, with a connection between the brain and a computer, the idea is that a human brain can benefit from the additional information handling and storage capabilities of a computer.

A more sensible goal is that humans use their brain-computer interface to improve their cognitive abilities, much like what we already do today, but with a far greater bandwidth. By bandwidth, I'm referring to the quantity and speed of information transfer.

'Turning your mind into AI'

An enhanced brain wouldn't fit the definition of artificial intelligence - and if it did, where would you draw the line? Our brains are already technically enhanced by technology, as we already store information and perform tasks with external devices.

However, as you asked: if we modeled a human brain on a computer, would this be AI? Well, it depends on your definition of AI. Using a dictionary definition, a computer that can perform the cognitive tasks of a human is an AI. This definition may need updating in the future, when computing systems can perform the cognitive tasks of a human without needing to perfectly model the operation of a human brain. Such a system would be a very different AI, and perhaps more true to the meaning of 'artificial'.

Kleindain68 karma

I’m curious on how your IP is shared/managed between your institution and yourself (given you mentioned entrepreneurship). How close is your PhD work and your own work? Presumably there is some form of contract in place?

nanathanan41 karma

The IP generated during my Ph.D. will be owned by me, but I will eventually have a profit-sharing contract with my University.

My Ph.D. work is the foundation of the startup, but not all of my work for the startup is related to my Ph.D.

Dr_SnM12 karma

So you have a pretty unique arrangement with your institution because that is far from typical.

Are you sure this is correct?

nanathanan3 karma

It's an unusual arrangement, but not unique. Most universities will have a commercialization arm that can be negotiated with. In many cases, you can either lease your IP rights, buy them back, or gain rights in exchange for a share of royalties.

brisingr02 karma

What university do you work at where they give you 100% of the IP?? I'm genuinely curious. I do in vivo ephys too.

nanathanan3 karma

IP ownership depends on the grants that fund your work, in addition to your university's policies. My university does not grant me ownership of my future IP. I've negotiated with my university to have a royalty-sharing contract in exchange for gaining rights over my work.

siensunshine61 karma

Thank you for your contribution to science! Where can we read about what you do?

nanathanan89 karma

Thanks!

I'm currently still patenting my work before publishing. When I go public with my tech in a year or so from now, I will update this post.

isuckwithusernames33 karma

You’re a current PhD student? Is the work you’re going to publish based off your grad research? How are you handling the conflict of interest? Are you sharing the patent with the school? If not, how are you legally doing invasive research?

Edit a word

mcquotables17 karma

Until published this sounds like a bunch of baloney.

Also I hope they have a good attorney because they're going to have a rude awakening when they realize all work done at their University or using University material is owned by the University.

isuckwithusernames8 karma

Yeah I think it’s all bullshit. He doesn’t describe any processing or technical details. Everything he says can be found on Wikipedia. But yeah the most obvious is his claim of somehow controlling who gets the IP. Human subject testing is really expensive. Invasive testing significantly more so. And the regulations are just crazy. If he thinks his university is going to pay for all that research and get nothing out of it, he’s nuts.

nanathanan6 karma

Haha, I think you're jumping ahead a bit there.

Not tested in humans yet, that would cost a lot of money. Like any other medical device company that spins out of a university, I will need to raise several rounds of funding to progress through the many stages of clinical trials and the stages that lead up to them.

My sensors are at an early stage and still just being tested for their electronic performance with cultured neurons, brain slices, and eventually mice. This is what is feasible with my current resources and time. Of course, after I graduate I'd hope to continue developing my sensors.

Adiwik32 karma

So how long before we can get this interfaced with VR?

Edit: I mean we can already use accelerometers around our ankles and wrists, but I still don't see anybody pushing that out onto the market because they believe laser scanning might be better, even though it's not one-to-one.

nanathanan48 karma

In order to use neural interfaces with commercial technology, it would need to reach the stage where anybody can get an NI implanted into their brain without any real risk. There are a number of companies working on this.

So the timeline for this is the same as the timeline for NIs reaching commercial viability, which is about 5-10 years for certain medical devices, and possibly 10-15 years or more for non-medical devices.

xevizero6 karma

What would be the practical applications of this? Would you really be able to see VR without a headset, for example? Or feel sensations in the game?

MillennialScientist8 karma

Sadly, no. In 5-10 years, you could use a neural interface to replace a few controller inputs, but it would probably have a 10-20% error rate. You might be able to do things like detect when someone's attention gets diverted by a sound and direct the VR to that stimulus, but there are probably easier ways to do that too. Right now the field is a little stuck figuring out what can be done with this technology that can't simply be done better with a simpler technology, for someone who is not completely paralyzed.

nanathanan10 karma

This is only true for non-invasive BCIs like EEG. Invasive neural interfaces will have a great deal more functionality than that.

Tenyo26 karma

Is there any reason to think that once this technology is in the hands of businessmen who will do anything for money and governments who took 1984 as a How-To guide, it won't be used for mind control?

nanathanan26 karma

I've come across many of these questions and hypothetical scenarios, as people have very fertile imaginations regarding the topic of brain-computer interfaces. The reality is far different from what we see in popular culture. I find the related episodes of Black Mirror particularly ridiculous and far-fetched, so I'll try to explain why.

It's not at all trivial for BCIs to be used for that purpose - not just with today's technology, but due to the practical realities of the field. Stimulating and sensing the brain requires sensors implanted in very specific regions of the brain. Anything hoping to use these sensors then needs your co-operation to extensively train a computer to make sense of what you are thinking in that specific region of your brain - it's not transferable from person to person. It's also likely that implanting sensors into certain areas of the brain may become restricted/illegal without medical justification. All this while assuming that people are somehow not going to enforce regulation and data protection laws around such devices.

So in order to get unwanted information entering your mind through a neural interface, you'd first have to have surgery to have it implanted in a part of the brain where it could actually affect your executive decisions (red flag 1), then you'd need to train a computer on exactly what you're thinking when that part of the brain is active (which would already imply consent), and then you would need a political environment and co-operating companies that actually allow this to happen (in which case you're likely to have bigger problems than mind control).

I genuinely don't see neural interfaces ever working in a fashion where external entities can influence your mind through them without your consent. Not only would this be practically unrealistic, it would also be an infringement of rights, and I don't know anybody in the neurotech community who would want to allow such a thing. Just like with any powerful new technology, neural interfaces will need to be tightly regulated.

It's also important to clarify that the sensors are intended to improve the bandwidth of information going in and out of your brain; they don't change the nature of that information. People's minds are already influenced by external factors, and people can attempt to control that to some limited capacity, but not much will change in that regard with improved bandwidth.

mmmmmjjjrrrrr23 karma

I know that the brain is flexible, so it develops itself to be able to process that information fairly easily [at least for us humans].

How do they develop computer algorithms that can receive signals and get meaning out of them? [Does our brain also send signals to your computers?]

I need some insights or ideas to understand that concept.

nanathanan32 karma

For a more ELI5 answer: when neurons communicate with each other, we can measure a change in the electrical potential near them. Depending on what type of sensor one uses, the information recorded tells us different things. Generally, the sensor will tell us that there is activity in a given area of the brain. We can then correlate activity in a given area of the brain with a certain thought or function of the person, and derive meaning from these correlations.

nanathanan3 karma

It depends on the type of signal being recorded and the sensor technology being used. See: https://www.sciencedirect.com/science/article/pii/S0896627308008970

The computing also varies depending on the signal type. For EEGs, see the following:

https://pubmed.ncbi.nlm.nih.gov/19229240/
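
To give a concrete flavour of the EEG side (not my own pipeline - all numbers below are simulated, purely for illustration): a minimal Python sketch of extracting alpha-band power from a single channel, the kind of feature non-invasive BCI work commonly feeds into downstream classifiers or control logic.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

# Hypothetical example: a 4-second, single-channel EEG epoch sampled at 250 Hz.
fs = 250
rng = np.random.default_rng(0)
eeg = rng.normal(size=4 * fs)  # stand-in for a real recorded channel (microvolts)

# Band-pass filter to the alpha band (8-12 Hz), a common first step for EEG.
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)

# Estimate band power with Welch's method; band powers like this are typical
# features for simple EEG-based classifiers.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])
print(f"Alpha-band power: {alpha_power:.4f}")
```

Real pipelines add artifact rejection, multiple channels, and per-subject calibration on top of this, but the basic filter-then-feature structure is the same.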

SevenCell20 karma

When you mention augmenting human capacity through BCIs, say to allow greater proficiency in maths, surely that presumes some high-level capacity to interpret brain signals as semantic thought?

If I want the answer to 2 + 7, how close are we to distinguishing the thought "2" from any other thought? How close is this to the thought "7", or any other number? How uniform is this across people?

A lot of this stuff has always seemed fanciful to me, but I'd love to be wrong.

nanathanan18 karma

Well, with a good sensor, good data, and decent software/ANI to make sense of it, it shouldn't be too difficult to make sense of the neuronal activity in this manner.

It has already been proven possible to put sensors into the motor cortex, and amputees/paralysis patients have been able to control a robotic arm as if it were their own. There's actually another IAMA where a team describes this procedure: https://www.reddit.com/r/IAmA/comments/aj3g0t/i_am_a_paralyzed_man_who_regained_control_of_his/

This was done with an extremely rudimentary sensor, poor data, and without the machine learning used today. Impressively, they managed this feat with just a team of experts and a few afternoons of training. So with the improved tech that already exists today, I don't see other or more complex functions being inaccessible.
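
For a rough idea of how that kind of motor decoding works (a simplified sketch with simulated numbers - not the actual setup from the linked study): binned firing rates are mapped to movement variables by a fitted decoder, here a plain least-squares linear map in Python.

```python
import numpy as np

# Hypothetical setup: 96 recorded units, firing rates binned at 50 ms,
# paired with the 2-D hand velocity measured during a training session.
rng = np.random.default_rng(0)
n_bins, n_units = 2000, 96
firing_rates = rng.poisson(5, size=(n_bins, n_units)).astype(float)
true_mapping = rng.normal(size=(n_units, 2))
velocity = firing_rates @ true_mapping + rng.normal(scale=5.0, size=(n_bins, 2))

# Fit a linear decoder by least squares; Kalman filters and regularized
# regression are the more common refinements in published BCI work.
X = np.hstack([firing_rates, np.ones((n_bins, 1))])  # append a bias column
weights, *_ = np.linalg.lstsq(X, velocity, rcond=None)

# Decode velocity for a new 50 ms bin of activity.
new_rates = rng.poisson(5, size=n_units).astype(float)
decoded_vx, decoded_vy = np.append(new_rates, 1.0) @ weights
print(f"Decoded velocity: vx={decoded_vx:.2f}, vy={decoded_vy:.2f}")
```

The "few afternoons of training" in that study is essentially collecting the paired (firing rates, intended movement) data that a decoder like this is fitted on.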

BUTT_SMELLS_LIKE_POO19 karma

I'm an AI Software Engineer (very early in my career) with a lot of interest in neuroscience, so your replies have been a pleasure to read so far!

  1. Reading your current replies, it seems like the sensors you're working with perform the function of relaying signals from the brain - how difficult would it be to send signals to the brain instead? I'd imagine the issue would be less to do with physically sending signals, and more with sending them in a useful way that our brains could interpret?

  2. Have you considered employing any AI architectures to help interpret the outputs you get from a brain? No idea if it would work, but it would be cool to see if anybody has tried a simple classifier or something - i.e. get readings from your sensors while showing someone images of a set of distinct objects, and use that data to train a classifier, then see if it can ultimately identify what object is being seen without explicitly being told the answer (like it would be during training).

Very cool AMA, would love to transition to this field if things continue moving in the exciting directions they have been! Thanks!

nanathanan9 karma

  1. I haven't tested stimulating neurons with my sensors yet. TBC. The usual problem with stimulating is that it causes cell damage to nearby neurons, so even if my sensors could do that, they are unlikely to do it very effectively.
  2. I don't currently test my sensors in a brain and therefore don't have this data. However, if and when I do, then absolutely. Machine learning will be the go-to strategy for making sense of the recovered data - your classifier idea is exactly the kind of thing people try (see the sketch below). People doing their PhDs in computational neuroscience often grapple with this type of task - it's not exactly my field, however.

Thanks for your question! :)
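
To illustrate point 2 (purely a sketch on simulated data - nothing from my lab): the idea you describe boils down to treating per-trial firing rates as features and the shown object category as the label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: mean firing rates from 64 sensors, recorded while a subject
# views one of three object categories (labels 0, 1, 2). All values are simulated.
rng = np.random.default_rng(1)
n_trials, n_sensors, n_classes = 300, 64, 3
labels = rng.integers(0, n_classes, size=n_trials)
class_templates = rng.normal(size=(n_classes, n_sensors))
features = class_templates[labels] + rng.normal(scale=1.0, size=(n_trials, n_sensors))

# Hold out a test set, fit a simple linear classifier, and check whether the
# viewed category can be predicted from the neural features alone.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```

With real recordings the hard parts are getting clean, stable features in the first place and collecting enough labeled trials per subject; the classifier itself is often the easy bit.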

tonicstrength16 karma

Are you a phD student and entrepreneur designing invasive sensors for the brain that enable electronic communication between brain cells and external technology?

nanathanan23 karma

Yes!

holyfudgingfudge16 karma

How do you take the wave-like electrical signal from the brain, and translate these into computer language in a way that you can analyze what is going on? Or do you store the signal as-is and worry about analyzing later? How do you capture signals, EKG? This is fascinating stuff!

nanathanan35 karma

The waves you are referring to are typical of EEGs, MEG, and EMG, which record activity in large populations of neurons. These devices measure activity in an area of the brain and there are large fields of research making sense of what can be inferred from this information. An EKG is used for measuring the electrical activity of cardiomyocytes (heart cells).

I work with invasive sensors that record electrical potentials (ideally) from individual neurons. The sensors pick up a neuron's own action potentials (APs) as well as local field potentials (LFPs), which are local potential fluctuations due to activity in nearby neurons. These 'spikes' in activity can be represented, via several intermediate steps, on a computer, reflecting cell firing events at the sensor location at a given time. This information is still not descriptive of the information being passed between neurons, as we don't know what neurotransmitters are involved. However, with a dense array of sensors within a given neuronal nucleus, one could attempt to decipher the type and purpose of the activity in the given neuronal circuit.
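
As a rough illustration of one of those intermediate steps (a simplified sketch on simulated data, not my actual pipeline): spike events are often extracted from the raw extracellular trace by thresholding at a multiple of a robust noise estimate.

```python
import numpy as np

# Hypothetical raw extracellular trace: 1 second sampled at 30 kHz, with 20
# injected negative-going spike-like deflections on top of background noise.
fs = 30_000
rng = np.random.default_rng(2)
trace = rng.normal(scale=10.0, size=fs)            # background noise (microvolts)
for t0 in rng.choice(fs - 30, size=20, replace=False):
    trace[t0:t0 + 30] -= 80.0 * np.hanning(30)      # crude spike waveform

# A common detection rule: threshold at ~4-5x a robust noise estimate
# (median absolute deviation), then count downward threshold crossings.
noise_sigma = np.median(np.abs(trace)) / 0.6745
threshold = -4.5 * noise_sigma
crossings = np.where((trace[1:] < threshold) & (trace[:-1] >= threshold))[0]
print(f"Detected {crossings.size} putative spikes (20 injected)")
```

In practice the trace is band-pass filtered first, and detected spikes are sorted by waveform shape to assign them to individual neurons, but thresholding is still the usual starting point.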

frog_at_well_bottom12 karma

What do you find is the biggest hurdle in this technology?

nanathanan24 karma

There are two main hurdles (in my opinion): biocompatibility and regulation.

Biocompatibility: Most of our electronics today comes from an era of CMOS silicon-chip technology, which isn't flexible or biocompatible. The research at the moment is waiting on developments in flexible electronics, conductive polymers, etc.

Regulation: Getting a medical device clinically approved means the materials used in it also need to be clinically tested/approved. Almost all of the best sensors for neural interfacing from flexible electronics research use novel materials that have never been tested in the human body. Sadly, getting a new material approved for medical devices is such an expensive and time-consuming process that even the most well-financed medical device companies avoid it.

fatbadg3r9 karma

My daughter was born with unilateral hearing loss. The auditory nerve on that side never developed properly. She uses a hearing aid that conducts the sound waves through her skull to the other side. She hates using it. Is there any technology on the horizon that would be an improvement over bone conduction?

nanathanan10 karma

Well, there are cochlear implants, but the currently available devices aren't very advanced so it would hardly be an improvement. There may be another solution being researched for your daughter's case, but I'm afraid I haven't heard of it. I'm sorry I can't be more helpful. I have a family member who was born deaf and has cochlear implants, so it has always been an important/interesting topic for me as well.

Someday, with a neural interface, it may be possible to stimulate the language centers of the brain to communicate. This wouldn't fix your daughter's hearing, though, and this technology reaching consumers could be a long time away.

mas12347 karma

How close are we to wireless “telepathic” communication with devices? And when that happens, how do we install ad blockers?

nanathanan6 karma

Q1: The technology faces a number of regulatory hurdles, as well as biocompatibility issues. The best full-technology-stack neural interfaces developed today (by Neuralink) are still having biocompatibility and sensor lifetime issues. As a part of the neurotech community, I'd hope these issues are solved in the next 1-5 years and that we get a commercially available neural interface within a decade, but it's anybody's guess right now whether this will actually happen.

Q2: I genuinely don't see neural interfaces ever working in a fashion where external entities can influence your mind through them without your consent. This would be an infringement of rights and I don't know anybody in the neurotech community who would want to allow such a thing. Just like with any powerful new technology, neural interfaces will need to be tightly regulated.

little_raaaaay6 karma

How old are you?

nanathanan12 karma

I'm 28

lemonslip5 karma

What’s your opinion on the Tesla neuralink? How viable is it and do we see it coming to market soon?

nanathanan3 karma

Agree in general with u/i_shit_my_spacepants' comment. Neuralink has certainly made quite sensational claims to increase hype around the project.

Nevertheless, they are taking some of the best research in academia and trying to get it through the many stages of animal and human testing to gain regulatory approval. Neuralink also has some of the world's foremost experts working with them and advising them, so it is a very promising project.

When Neuralink does eventually bring something to market, possibly 5-10 years from now, it's likely to be for a number of niche clinical applications for treating neurological disorders.

Wheredoesthetoastgo25 karma

How do you explain what you do to your older family?

And how close are we to uploading our consciousness to the cloud? I need to know before about... Oh, 2065?

nanathanan24 karma

Q1: It depends on the family to be honest. If it's someone I want to have a discussion with, I'll tell them I research sensors for the brain with the goal of treating neurological disorders like Parkinson's/Epilepsy/depression. If it's someone I don't want to try to explain things to, I will just say I'm an engineer - this is a surprisingly effective way of avoiding further questions.

Q2: Short answer: I don't think it will ever be possible to 'upload' your consciousness.

Longform answer:

Well, first we need to come to some consensus on what exactly the intangible collection of qualities that we call 'consciousness' is. This is actually ill-defined and very poorly understood. Better neural interfaces will let us study the brain further and take neuroscience to new eras of understanding - this is why the technology is so important beyond the 'upload your brain to the cloud' interest we see in popular culture.

For the sake of an answer, I'll define consciousness as: your personal collection of memories, learned behavior, the genetic make-up of your brain that is unique to you, and your ability to compile this information to make decisions and control your physical body.

It's difficult to say whether it will ever be possible to 'upload' or essentially 'move' your consciousness to a computer. At the moment, it seems it will only ever be a copy of the brain that we can replicate on a computer. The physical consciousness in your brain will always be physical.

MR-DEDPUL5 karma

I'm a psychology major, what kind of studies would I need to pursue in order to research this once I advance further in my academic studies?

How far are we from wetware systems a la Iron Man (eg interacting with technology seamlessly as if it were another limb)?

nanathanan4 karma

You would need to either do an undergraduate in electronic engineering or materials science in order to work on sensors.

In order to work with the data that comes from the sensors, you could do a bachelor's or maybe even masters in computational neuroscience to get started in that field.

If you want to just use the technology and study the brain, then I recommend doing a master's and then Ph.D. in neuroscience.

duxs4 karma

Do you have any interest in non-invasive interfaces? For instance, CTRL-labs' wristband or the 'transcranial magnetic stimulation' technology being used in the medical field?

nanathanan3 karma

I find that the non-invasive tech out there is just skimming the surface of what BCIs can do. That sort of tech is useful in diagnostic tools used in hospitals and a few gimmicky commercial applications, but nothing truly world-changing.

ultranothing4 karma

Could we ever have video games in the future where all five senses are hooked up to an artificial world?

nanathanan5 karma

It's not a physical impossibility. Unless it goes against a law of physics, no technology is impossible.

There are people working on stimulating and sensing the optical nerve. There are researchers also working on stimulating taste/smell. Stimulating haptic feedback is already done and widely used. Headphones technically stimulate your sense of hearing, so that already exists.

mtanfpu3 karma

Sorry that I'm late to the party. What do you think would be the sociological impact of BCIs? For example, will they increase or decrease social inequality?

Best of luck in your work, hope to use your product someday.

nanathanan2 karma

It's too early to say how BCI's will be realized and the sociological impacts they will have. Keep in mind that for a long time this technology will primarily be used for treating neurological disorders and other ailments in a clinical setting.

Krubanosuke3 karma

Serious question, do you need test subjects?

I am willing to devote myself to this because I believe in this kind of research.

We as a society of humans have damn near integrated ourselves with technology to the point of dependency so I feel this is the next logical step.

I am not a scientist, I have no degree within any medical field to assist you academically, or money because well I am poor.

You are welcome to my brain. I'm not using it much anyways.

nanathanan2 karma

I don't currently test my devices on humans! I'm still a long way away from that. I've designed them for the purpose of being implantable, but there are still many years of research before I get to that stage. At the moment I'm just testing to confirm that my sensors work in a lab setting.

There are companies currently working in this space that are moving towards or already testing in humans, but this will be in a clinical setting to treat specific neurological disorders. It will be several years if not more than a decade before these will be available to healthy people.

automotiveman3 karma

Selfish question, as someone who 4 years ago had an eye removed how far away are we from "bionic" eyes for lack of a better word. Something that could transmit images directly to our brain for creating or renewing eyesight. I am of the impression that this so far is beyond reach given our current knowledge and technology?

nanathanan4 karma

I haven't seen any amazing cuff electrodes that can interface with the optic nerve yet, which would be required for some form of bionic eye. I think this comes down to innovation in sensors, after which it comes down to the extremely complex surgery and mapping of the neuronal activity of the optic nerve. I'd guess this is a long way away, easily 10-15 years.

TasarasTheGreek3 karma

What about MS or Parkinson's disease? Are you working on something for diseases like them?

nanathanan4 karma

Neural interfaces are already being used to treat Parkinson's. Look into deep brain stimulators. DBS devices developed in 1997 are still being used today, and they are rudimentary compared to what we can do with research devices today. Hopefully, better DBS devices will pass clinical trials and reach patients in the coming years.

Kilruna2 karma

In your opinion, how long will it take for a commercially available interface (comparable to what we see with the spread of smartphones now), and do you think this assumption is realistic?

nanathanan2 karma

I think it is very realistic. It all depends on funding, but I think within a decade we could see the first occurrences of this.

someguynamedaaron2 karma

Through your studies and experimentation, how has your perception of free will changed?

nanathanan3 karma

I've had the discussion on whether 'free will' exists many, many times, and my opinion hasn't changed: you have to believe in free will (if you really think about it, it's an oxymoron).

It always boils down to how you want to define free will, which is largely subjective and also rests on other ill-defined concepts (consciousness, for example). I think it goes beyond the topic of this IAMA as it's more philosophical than physical, but it's certainly an interesting topic for another post!