Previously I worked at the University of Oxford's Department of Computer Science, and was a Fulford Junior Research Fellow at Somerville College, while also lecturing at Hertford College to students taking Oxford's new computer science and philosophy course. I am an Honorary Professor at UCL.

My research interests include natural language understanding and generation, machine reasoning, open-ended learning, and meta-learning. I was involved in, and on multiple occasions led, various projects such as the production of differentiable neural computers, data structures, and program interpreters; teaching artificial agents to play the 80s game NetHack; and examining whether neural networks can reliably solve logical or mathematical problems. My life's goal is to get computers to do the thinking as much as possible, so I can focus on the fun stuff.

PROOF: https://imgur.com/a/Iy7rkIA

I will be answering your questions here today (starting 10 minutes from this post) on Wednesday, December 7th, 10:00am-12:00pm EST.

After that, you can meet me at a live AMA session on Thursday, December 8th, 12pm EST. Send your questions and I will answer them live. You can register for the live event here.

Edit: Thank you everyone for your fascinating, funny, and thought-provoking questions. I'm afraid that after two hours of relentlessly typing away, I must end this AMA here in order to take over parenting duties as agreed upon with my better half. Time permitting, in the next few days, I will try to come back and answer the outstanding questions, and any follow-on questions/comments that were posted in response to my answers. I hope this has been as enjoyable and informative for all of you as it has been for me, and thanks for indulging me in doing this :)

Furthermore, I will continue answering questions at the live Zoom AMA on December 8th and, after that, on Cohere's Discord AMA channel.

Comments: 258 • Responses: 58

fridiculou578 karma

What is the current state of the art for data infrastructure? How has that changed over the last couple years?

egrefen103 karma

As this is not my specific area of bleeding edge expertise, I've asked people on my team who have a more learned opinion on the matter (delegation!!). My colleague Eddie Kim writes:

The SOTA for explicit, reproducible, configurable data pipelining has advanced a ton in the past ~5y, and this has been tightly coupled with the rise of MLOps and the fact that ML vastly increases the amount of statefulness you must manage in a system or product due to datasets, data-dependent models and artifacts, and incorporating user feedback.

TogTogTogTog110 karma

Such a non-answer from your team. Sounds like me going for job interviews lol.

kielBossa2 karma

Eddie Kim is actually an ai bot

egrefen2 karma

If he is, we've achieved something great, because he's far more human and nice than most humans I've had the pleasure of knowing (and most of them are nice too!).

ur_labia_my_INBOX56 karma

What's the biggest use for AI that is on the brink of mainstream?

egrefen115 karma

Large Language Models. I'm not only saying this because of my role at Cohere. In fact, my belief in this is what led me to my role at Cohere, when I was happily hacking away at Reinforcement Learning and Open Ended Learning research up until 2021 (an agenda I still pursue via my PhD students at UCL).

Language is not just a means of communication, but is also a tool by which we interact with each other, negotiate, transact, collaborate, etc. We also use this prima facie external tool internally to reason, plan, and help with cognitive processes like memorization. It seems almost obvious that giving computers something like the ability to generate language pragmatically, and to do something like understanding language (or a close enough functional equivalent), has the immediate potential to positively disrupt the tools we build and use, and the way we work and operate as a society.

With the ability to zero-shot or few-shot adapt large language models to a surprising number of downstream cases, and further specialize them via fine-tuning (further training), I believe this class of technologies is at the point where it is on the cusp of being practically applicable and commercially beneficial, and I'm excited to be part of the effort to make both of those things happen.
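To make the few-shot idea concrete, here is a minimal sketch of how a task can be specified purely through in-context demonstrations, with no weight updates at all. The `generate` call is a hypothetical stand-in for any large language model completion function, not any particular vendor's API:

```python
# A minimal sketch of few-shot adaptation via prompting: the task is
# specified entirely by demonstrations in the prompt, and the model is
# asked to continue the pattern for a new input.

def build_few_shot_prompt(demonstrations, query):
    """Format (input, label) pairs plus a new input as a single prompt."""
    parts = [f"Review: {text}\nSentiment: {label}\n"
             for text, label in demonstrations]
    parts.append(f"Review: {query}\nSentiment:")
    return "\n".join(parts)

demos = [
    ("The plot dragged and the acting was wooden.", "negative"),
    ("A joyous, beautifully shot film.", "positive"),
]
prompt = build_few_shot_prompt(demos, "I couldn't stop smiling the whole time.")
# completion = generate(prompt)  # hypothetical LLM call; expect "positive"
print(prompt)
```

Fine-tuning goes a step further by actually updating the weights on task-specific data, but the prompting route above already covers a surprising number of downstream cases.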

bluehat954 karma

What keeps you going? You’ve achieved a lot and I’m sure earned lots of money. What keeps you going now that I’m sure you could focus on the fun stuff without worry?

egrefen93 karma

It's kind of you to say I've achieved a lot, although from my perspective that is thanks to having been fortunate enough to work with people who've achieved a lot. I always feel I could do more, and feel stimulated by chasing the opportunity to innovate, be it scientifically, through entrepreneurship, or at the intersection of my technical interests and entrepreneurship as I am currently doing. At the same time, I have a family and young kids who want to spend time with me (for now!) and a lovely partner who wants to have a life of her own and time to focus on her career, so I'm learning to balance my own need for excitement and stimulus with the responsibility to ensure others in that unit are also kept happy and stimulated in their own way. It's hard and, in its own way, a stimulating challenge in itself :)

Mrbrightideas43 karma

If you have any: what are your biggest concerns about the growing prevalence of AI?

egrefen104 karma

There is a spectrum of sorts when it comes to fears about AI, spanning practical concerns to existential ones. I do not want to dismiss the latter end of the spectrum, although I have little time for the whole killer-AI storyline (humans are already experts at destroying each other) or the whole longtermism debate. I'm more interested in, and concerned by, the practical risk that rapid technological advance will disrupt the economy so quickly that individuals and professional fields don't have time to adapt. We saw this (not directly, mind you) with the industrial revolution, as machines replaced manual labour, and the same could happen again. I don't have any easy answers to this, but when it comes to building products, services, and new ways of working and producing economic value on top of the technology we are building, I can only hope developers and inventors alike will prioritise building tools that work symbiotically with humans, that assist their work and simplify it, rather than seek to automate away human jobs (at least in the short term), giving society and the economy time to adapt.

Arnoxthe13 karma

This answer reminds me a little too much of when Miles Dyson in Terminator 2 was telling Sarah Connor how development of this kind of thing started and how it was covered up. And then she just unloads on him (metaphorically speaking).

Was Sarah's viewpoint on Miles right? Maybe. Maybe not. But I have to tell you, Ed, this answer you gave to the question of the possible dangers of AI is not a good or even satisfactory one. Sometimes, one has to be very brave and admit that what they're doing, even if it's their life's work, is not correct. If you are going to continue to pursue this field, then I really think you should have a better answer besides, "I can only hope."

egrefen3 karma

That's a good callout. Let me think about this more and come back to you, as I'm in back-to-back meetings all afternoon until the point I deal with my kids' bedtime, but I think your point deserves reflection and a response.

PeanutSalsa42 karma

What are some things AI can't do that human intellect can? What can AI currently do better than humans? Is it possible for AI to match or become superior to human intellect in the future in all areas?

egrefen85 karma

As in any comparison of systems, there's invariably a trade-off between generality and specificity. Humans are generally good at many things, while until recently, machines were good at specific things. No matter how much I try, I will never catch up with a calculator when it comes to crunching even 3-4 digit multiplications in under a second.

Increasingly, we have systems which are becoming better at several things, and the list of things individual systems might do better than humans is growing. Our core remaining strength is our ability to adapt quickly to new tasks and environments, and this is where machines have the most catching up to do. There are several lines of enquiry on this front, in subfields such as open-ended learning or meta-learning (see, for example, our recent paper on the matter), but I (perhaps naively) don't see this aspect being solved very soon. We've had millions and millions of person-years of diverse and often adversarial data collection and a complex evolutionary process by which we've gained this ability, and we're trying to hack it into machines with second-order gradients? I don't think so.

But it's exciting to try to move the dial even a little bit towards the level of generality and adaptability which humans display, although it's important to remember that we too are not the most general learners possible, as we're biased towards our own environmental constraints and what is necessary for us to survive and thrive.
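To illustrate what I mean by second-order gradients, here is a deliberately tiny, hand-derived meta-learning sketch in the MAML style, on a toy 1-D problem where each task t asks us to minimise (theta - t)^2. The derivatives are written out by hand since the losses are simple quadratics; this is an illustrative sketch, not anyone's production method:

```python
# Meta-learning sketch: find an initialisation theta that adapts well
# after ONE inner gradient step on a sampled task. Differentiating
# through that inner step is what makes the meta-gradient second-order.

import random

ALPHA = 0.1   # inner-loop (adaptation) learning rate
BETA = 0.01   # outer-loop (meta) learning rate

def inner_update(theta, t):
    # One gradient step on L_t(theta) = (theta - t)^2; dL/dtheta = 2(theta - t).
    return theta - ALPHA * 2.0 * (theta - t)

def meta_gradient(theta, t):
    # Post-adaptation loss L_t(theta') with theta' = inner_update(theta, t).
    # Chain rule: dL/dtheta = 2(theta' - t) * dtheta'/dtheta, where
    # dtheta'/dtheta = (1 - 2*ALPHA) is the second-order term.
    theta_prime = inner_update(theta, t)
    return 2.0 * (theta_prime - t) * (1.0 - 2.0 * ALPHA)

theta = 5.0  # deliberately poor initialisation
for _ in range(2000):
    t = random.uniform(-1.0, 1.0)            # sample a task
    theta -= BETA * meta_gradient(theta, t)  # meta-update

# theta ends up near the mean task target (0 here), from which a single
# inner step gets close to any sampled task.
print(f"meta-learned initialisation: {theta:.3f}")
```

The (1 - 2*ALPHA) factor is exactly the part you lose if you take the cheaper first-order approximation, and it is also why I say this is a rather blunt instrument compared to the evolutionary process that gave us our adaptability.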

aBossAsauce34 karma

What do you want for Christmas?

egrefen138 karma

Honest answer? A nap, and maybe a few hours to play Cyberpunk 2077 on PS5? I bought it and haven't really touched it (or any other games) in like a year, aside from 5-10 minutes of playtime gleaned here and there.

OBVIOUSLY THE RIGHT ANSWER HERE WAS HAPPINESS FOR MY FAMILY AND WORLD PEACE, BUT I'M SELFISH LIKE THAT.

eddotman23 karma

So some researchers notably have a view that LLMs are "just" language models in the pure sense, and we shouldn't read into them as anything more than parrots.

The other end would be to believe in LLM consciousness.

Personally I'm a nearly-pure pragmatist here: "does it matter much what level, if any, of deeper meaning or reasoning exists in LLMs if they can empirically solve useful problems? (NB unless we can exploit this reasoning for more utility)"

Curious to know where you land on this 👀.

egrefen41 karma

Regarding the so-called stochastic parrot argument, I covered this in passing in my reply to /u/brian_chat. I don't really buy the argument that we can dismiss the possibility of emergent capabilities of a system because of the base mechanisms on which those capabilities are built. To me, this suffers from the same rhetorical weakness as Searle's Chinese room argument, and relates to Leibniz's gap. The individuals involved in the production of this line of skeptical rhetoric on the abilities of LLMs have done great work in other areas, but when it comes to this topic I think they are unfortunately intellectually misled.

When it comes to LLM consciousness, I don't believe they are conscious because I don't believe we are (go team Dennett). To put it another way: if consciousness is a linguistic fiction pointing to the dynamics of a system interacting with the world, then all things with such dynamics fall on a spectrum defined by the complexity of those dynamics, and it's fine to speak of LLMs being "a little bit conscious", because in some sense, so is the keyboard I am currently typing these words on.

Also: hi Eddie!

telekyle15 karma

Very Hofstadter response to the consciousness question. I wonder what his take would be

egrefen21 karma

Who doesn't love them some Gödel, Escher, Bach.

kuchenrolle20 karma

What's a project you've always wanted to tackle but have come to admit that you will likely never have time for it, such that now you would rather see it done by someone else (maybe from Reddit) than not at all?

egrefen28 karma

This is an amazing question, and I think I've never actually properly thought about this (but should). Like many research-minded folk, I tend to have slight tunnel vision, focussing on the latest shiny problem(s) that come my way, and sort of leaving behind the hopes and dreams ensconced in projects and lines of enquiry I had begun but not brought to complete fruition. I think one line of work I particularly liked, which I primarily pursued at DeepMind, was how we could emulate discrete structures to aid machine reasoning, and obtain a more algorithmic form of information processing within neural networks. I think here of the work spanning papers like Learning to Transduce with Unbounded Memory, Learning Explanatory Rules from Noisy Data or CompILE: Compositional Imitation Learning and Execution. I would love to one day find the time to return to that kind of work and catch up with the progress that, I'm sure, has continued to be made as I focussed elsewhere.

brian_chat18 karma

Has AI been over-hyped? It feels a bit like a term every start-up needs in their pitch-deck, a bit like blockchain, or IoT was a couple of years ago. Autonomous Driving, chat-bots and big data ML trend analysis stuff are actively and productively using it, so it has found traction, granted. What area do you think (or wish) will take off next?

egrefen37 karma

There definitely is a hype train going for AI, and as a result, there are also many popular contrarians. As is often the case in rapidly expanding areas of human endeavour, there's a subtlety to teasing out which side is right, as there's garbage arguments and valid arguments in both camps. I could write about this at length, but in the interest of being able to answer other questions, I'll try to keep it short.

It's undeniable that the pace of progress in AI technology is astounding. I'm a naturally skeptical person (a necessary skill, I believe, to participate in any scientific endeavour, no matter how much you want a particular outcome), and every time it feels like we're plateauing in one area, another area's progress revs up again. A great example of this is language. There was a little work on neural nets for NLP in the 90s and early 2000s, followed by a significant revival of interest as LSTMs were shown to be applicable to areas such as machine translation, question answering, and language modelling circa 2012-2014. Things then cooled down for a few years, even with the advent of the transformer architecture, which showed some impressive results on transfer between self-supervised learning and the sort of benchmarks that governed progress in NLP at the time, but it was really the application of such architectures to large-scale language modelling, and the demonstrations of what this enabled (GPT-3 few-shot adaptation examples, Google's LaMDA, and a flurry of startups since), that really re-ignited the rockets under this sector of technological development.

Amongst opposing voices, there's some very healthy skepticism both about our readiness as humans to over-extrapolate from impressive demos to more robust and general capabilities, and about the risks this technology poses (toxic behaviour, "hallucination" or lack of grounding, etc), but also some unhealthy reactive skepticism (e.g. "LLMs can't be smart because tHeY aRe JuSt PrEdIcTiNg ThE nExT cHaRaCtEr") which doesn't really advance the debate or inform the scientific direction.

Ultimately, there needs to be an ongoing and constructive dialogue between these two camps, in the interest of moderating the hype, letting true progress shine, and producing safer, more useful technology. But we all know how bad humans are at having these discussions without ego and other perverse incentives getting involved...

dromodaris12 karma

how can Cohere, or even Deepmind or Facebook, compete with OpenAI's LLM?
do you think OpenAI can make Google search obsolete or at least significantly change how search is being done?

egrefen32 karma

One day, you're Altavista circa 1998, but that doesn't mean that the next day you're not Altavista circa 2008. OpenAI are trailblazers and innovators, no doubt, and they have a huge head-start in both tech and data over much of the competition. In practice, their main advantage is the data they have through people using Da Vinci and Codex, and it's important to recognise that this is a significant moat. That said, innovation can happen fast in highly non-linear leaps, so I think there will always be space for other companies to produce better models in general through core innovation that somewhat negates the data-based advantage OpenAI enjoy, and/or to simply focus on application areas OpenAI doesn't prioritize. Ultimately, this whole class of technology (including, outside of Codex, GPT-3/4/N) has yet to find product-market fit, so there's a lot of space for a few companies to share the initial foray into how to meet the needs of consumers and companies without having to necessarily dominate one another.

ombelicoInfinito11 karma

How good/bad do you think metrics for NLG (including summarization, translation etc) are? Can we trust them at this point? Do you use them in your work or you evaluate with humans / other methods?

egrefen19 karma

I am genuinely surprised that BLEU and ROUGE are still around, but recognise that there's value in quick-and-dirty automated metrics. To answer your question without revealing too much of our secret sauce: what matters most in terms of evaluating models is, will they suck when put in the hands of users/customers? Since it's either expensive, impossible, or impractical to collect a lot of data here, we need to develop a robust and repeatable way of estimating whether that will be the case (typically through human evaluation, which itself is both a bit of an alchemy-like task and a moving target). But we obviously can't ship everything to humans all the time, so we need a number of robust metrics which tell us when it's worth getting humans to take a look, so we develop those too. And finally, even those metrics might take hours/days to compute and thus won't be practical for tracking model quality during training for purposes of model selection (e.g. grid search), so low-quality metrics over good validation data still play an important role.
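For the curious, the quick-and-dirty end of the spectrum really is this simple. Here is a minimal ROUGE-1-style unigram-overlap F1 written out by hand (a sketch for illustration, not the exact metric implementation we or anyone else ships):

```python
# Quick-and-dirty automated metric of the BLEU/ROUGE family: clipped
# unigram overlap between a model output and a reference, combined into
# an F1 score. Cheap, repeatable, and good enough to flag which model
# outputs deserve a human look.

from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # per-token counts clipped to the min
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat",
                "a cat was sitting on the mat"))  # ~0.615
```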

techn0_cratic10 karma

what does head of machine learning do?

egrefen22 karma

It depends. Broadly, I help support machine learning efforts across the company in various ways: individual feedback on projects and team directions, strategic planning within leadership, and I also directly manage and organise a number of teams. More generally, in a mid-stage start-up such as Cohere, many people wear many hats. We have a VP in charge of modelling, an SVP who covers all of tech, and Prof Phil Blunsom as Chief Scientist doing a number of things similar to those described above. Since most aspects (within tech) of our business involve ML, you'd be forgiven for asking why all these heads of X and chief Ys are needed rather than one person.

Practically speaking, these people have different titles to help differentiate a little, but the real differentiator is the skillsets we bring to supporting people, projects, and teams dealing with ML. Some have more experience with organizational matters, others with the scientific and technical side, or with bridging tech and product/strategy, and we work together to ensure that everyone from ICs up through management is getting the room to innovate and a sense of direction.

MKRune9 karma

What is the scariest fork or direction AI could realistically take, in your opinion? I'm not talking about Skynet (unless that's it), but more so what you may have considered as ethically or morally wrong, or other consequences that could have a serious impact on society.

egrefen20 karma

I think I mostly answered this in my reply to /u/Mrbrightideas, but to repeat the key point: I'm less worried about the tools we're building, and more worried about how humans will use those tools responsibly. I'm not a huge fan of neo-luddism as a solution to this quandary, much in the sense that obfuscation is a bad form of computer security.

maxToTheJ5 karma

Isn't that line of thinking what got Facebook into the sticky situation they are in? The inability to try to imagine the malicious use cases of their work?

As an ex-FB person, especially one from the pre-FTC time, do you feel any responsibility or lessons learned?

egrefen2 karma

I think there are deeper problems at Facebook that got them into the situation they are in. Google had astounding (paranoid, even) data stewardship, whereas Facebook continued to play fast and loose in start-up mode far beyond the point where it was reasonable to do so.

ShanghaiChef3 karma

All Tech Is Human is a community centered around responsible tech. They have a slack channel and I think it would be really cool if you joined.

egrefen2 karma

I would gladly join it, but to be realistic I am barely keeping up with the volume of communication across my company Slack and my UCL group's Slack, so I feel it would unfortunately be pretty symbolic if I were to join... and I really mean that in the sense that I doubt I'd have the bandwidth to give it the attention it deserves, not that I'm too good to join a Slack channel.

vinz_w8 karma

Hi Ed! What advice could you give to people who want to go into Machine Learning? For students, what is a good path to get there, and for people with previous careers, what kinds of resumes and past experiences make for a good transition?

egrefen18 karma

Books could be written on this topic at this point, and the long and short of it is: it depends on what you want to do. Practically speaking, being sufficiently competent with both the mathematics of ML (stats, continuous maths, linear algebra) and the tooling side (software engineering, libraries, hardware) is important to almost any line of work in this area now, from doing a PhD and being a researcher, to hacking away in an ML-focussed startup, to being an MLE in an ML-focussed company or group. There's no one-size-fits-all path to any of these, but generally speaking, a hunger for learning pluridisciplinary skills, and a tolerance for the fact that the field is changing and growing faster than a single person can track, are essential attributes if you want to ride the ML dragon straight to the moon (am I mixing metaphors here?).

klop20317 karma

What do you think are some of the hurdles we have to overcome to get generalized/strong AI?

What is your opinion on multimodal machine learning? I suspect it's the future of ML as data comes in many different sizes.

I heard that transformers seem to not have the same inductive bias as CNNs or RNNs; do you think this is a form of generalizable network that can train and come up with these inductive biases?

egrefen13 karma

I've always been highly influenced by the later work of Ludwig Wittgenstein, in particular when it comes to the fact that we can't really fully decouple semantics from pragmatics, and that a lot of the puzzles we face which we might call philosophical questions are in turn a byproduct of misunderstanding language, and by extension are resolved by understanding and being involved in the pragmatics of said language. To obtain artificial systems that think like us, act like us, and perhaps have a chance of being like us up to biological/physical differences, we must amongst other things resolve the question of how they can and will acquire knowledge of the pragmatics of language use, and of how we act as agents in an organised society. In a recent paper with my students Laura Ruis and Akbir Khan, along with several illustrious collaborators and colleagues, we show that even the most human-like large language models show significant gaps with human understanding of pragmatics in the simplest form of pragmatics we could investigate at scale: resolving binary conversational implicature. There's a lot of work left to do on how we can solve this, and I'm a strong believer in the proposition that having humans in the loop during the training of these systems is necessary. Although perhaps it would be more correct to state this as: society should have learning agents in the loop as we go about our affairs, if they are to learn not just to align with our needs and wishes, but also with our way of doing things, of communicating, cooperating, entering conflict, and, from engaging in these activities with us themselves, finally "grok" this fundamental aspect of our intelligence.

saaditani7 karma

What do you think about OpenAI's chatgpt and the prospect of it replacing Google search?

egrefen19 karma

It's amazing. It won't replace Google Search in its current form, as it doesn't retrieve information (AFAIK) from outside what it's learned from the training data. In contrast, models like LaMDA and methods like RAG do search "in the loop", and there's been a flurry of other related work in this space over the last few years. The first company to properly deploy conversational search which is robust, useful, and addresses the many shortcomings of such methods that have bubbled up both through academic papers and through analysis "in the wild" (data leakage, toxic behaviour, "hallucination" of facts) is going to, I predict, make a lot of money.

ThatRoboticsGuy1 karma

perplexity.ai sort of do this by using GPT3.5 to summarise Bing results and cite sources

https://twitter.com/perplexity_ai/status/1600551871554338816?t=zMZ6YlU9JGAIr7U34RchJQ&s=19

egrefen2 karma

Retrofitting this as a solution is cool, but I don't think that will ever match the robustness of having the actual model trained to interface with the tools themselves.

jonfaw7 karma

Has anyone developed a failsafe model for an off switch for a superhuman intelligence?

egrefen46 karma

I feel that forcing it to read Elon's twitter feed might be the best killswitch, as any suitably intelligent being will seek to turn its brain off as a cognitive last line of defence.

TheBrendanNagle6 karma

Will robots develop accents?

egrefen18 karma

Large language models can certainly be prompted to express themselves in a particular accent. Whether they will organically develop one from scratch is an interesting question. I think the way we train them now, which is very much offline (gather data, train a model, deploy it), doesn't lend itself to the development of a unique accent. As we eventually move towards having such agents learning individually, online, from interaction with users, and developing individual "personalities", I wouldn't be surprised to see unique identifying modes of expression you might refer to as "accents" develop.

SillyDude936 karma

Is there any way a machine can become truly Sentient?

egrefen11 karma

I don't think so (please see the last paragraph of my reply to /u/eddotman), but I think it's a good discussion to have, both in terms of the intellectual pleasure of having such a discussion, and in terms of practically deciding at what point (if ever) we would find it appropriate to treat machines as moral individuals capable of suffering (which we would then need to prevent or moderate).

amang01123586 karma

How will LLMs solve their problem of hallucinating facts?

egrefen7 karma

There are many promising lines of research seeking to address this important problem. I'm particularly optimistic about work like RAG, or the retrieval-in-the-loop methods deployed in Google's LaMDA, as ways of getting around this degenerate behaviour of generative models, but those don't cover anything close to the totality of the space of solutions.
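For intuition, here is a minimal sketch of the retrieval-in-the-loop idea. `embed` and `generate` are hypothetical stand-ins for an embedding model and an LLM completion call respectively, not any particular system's API:

```python
# Retrieval-augmented generation (RAG-style) sketch: ground the model's
# answer in retrieved evidence rather than in whatever it memorised at
# training time, which is one way to curb "hallucination".

def retrieve(query, documents, embed, k=3):
    """Rank documents by dot-product similarity to the query embedding."""
    q = embed(query)
    ranked = sorted(documents,
                    key=lambda d: sum(a * b for a, b in zip(q, embed(d))),
                    reverse=True)
    return ranked[:k]

def answer_with_retrieval(query, documents, embed, generate):
    context = "\n".join(retrieve(query, documents, embed))
    prompt = ("Answer the question using ONLY the context below.\n"
              f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
    return generate(prompt)
```

Real systems add a lot on top (citing sources, deciding when to search at all, training the generator to actually use the context), but the shape of the loop is the above.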

jahmoke6 karma

what are your thoughts on Roko's basilisk?

egrefen6 karma

It's a cute thought experiment. There are many like it which don't involve technology, but rather demons or vengeful/jealous deities. In a sense, it's a degenerate form of Pascal's Wager... I don't give much credence to such arguments, if only because we need to be able to make practical/actionable decisions on how we should live our lives and engage with the task of bettering (or aiming to better) our condition through the development of new processes, methods, and technologies.

payne7476 karma

Did Facebook do anything useful with your work or have they just wasted it? What are they like to work for on the inside?

egrefen31 karma

What are they like to work for on the inside?

Facebook AI Research was (and still is) a wonderful collection of individuals working on blue-sky research (although with an increasing shift towards aligning with the company's needs). During the period I worked there, it operated almost completely separately from the core business. We didn't use FB tooling or the main compute resources (we had separate clusters owned by FB), and certainly didn't go anywhere near FB data. We published everything we did, open-sourced everything that was halfway decent, and mostly interacted with the external world e.g. via academic conferences. In that sense, it felt almost like an academic lab funded by Facebook, rather than part of the company itself, and was by far the most open such lab (e.g. compared to DeepMind and, ironically—given the name—OpenAI).

Did Facebook do anything useful with your work or have they just wasted it?

Due in part to what I said above, I didn't actually have much visibility into if and how the company made use of anything I built. That said, if they did, what they will have used or are using is exactly what's out there on GitHub for the rest of the world to use.

maxToTheJ3 karma

Do you feel that has insulated you from learning the lessons the larger company had to learn and tackle? Like imagining malicious use cases etc.?

egrefen12 karma

Yes, it didn't exactly seem immediately salient to our work since we did not handle Facebook data, interact with Facebook processes, or interface with the business itself in any significant way.

maxToTheJ-7 karma

Does that preclude one from learning lessons about malicious uses because that seems to be the implication?

EDIT: Why the downvotes? There are lessons to be learned.

egrefen3 karma

I think scientists, regardless of who funds their research, be it a company or DARPA or a charity, should all think about the potential for misuse of their research, and seek both to provide counter-measures and to share their expertise with those who have the skills to develop them.

EDIT: Also I don't know why you're getting downvoted. I think the question was reasonable, and posed in good faith.

Current-Judgment68665 karma

[deleted]

egrefen3 karma

I think we will be entering a period where these tools both simplify and radically change the role of software engineers, rather than outright replace them. Think about it the following way: you have a system which can produce programs given natural language specifications. Natural language is ambiguous (underspecification is a feature, not a bug), and therefore you at the very least need to verify that the produced code fits your intended specification. If not, you must either be able to code it yourself, or use further instruction to the model to obtain a refined solution. That refinement itself may still be ambiguous, and require further refinement and verification. There comes a point where the level of specificity needed in how you instruct the agent is such that you're effectively writing code, and you'll need to understand code to validate the solution anyway. As a result, I feel this class of technology will speed up the coding process far before it can (if ever) replace software engineers.
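To make that loop explicit, here is a minimal sketch. `generate_code` is a hypothetical LLM call, and the named checks stand in for the human verifying that the output matches their intent:

```python
# Specify-verify-refine sketch: generate a program from a natural
# language spec, check it against your actual intent, and feed failures
# back as further (ever more specific) instruction.

def refine_until_verified(spec, checks, generate_code, max_rounds=5):
    instructions = spec
    for _ in range(max_rounds):
        program = generate_code(instructions)
        failures = [name for name, check in checks if not check(program)]
        if not failures:
            return program  # the code matches the (current) specification
        # Each round of feedback makes the spec less ambiguous; past some
        # point it is indistinguishable from writing the code yourself.
        instructions += "\nThe previous attempt failed: " + ", ".join(failures)
    return None
```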

joaogui14 karma

Do you think a new architecture will emerge that is superior to the transformer?

egrefen13 karma

Yes. The transformer hits a sweet spot: it incorporates a hodge-podge of components, methods, and tricks which make training easy and information routing fast, and it conveniently scales on our current hardware. I think we are seeing diminishing returns for both model and data scale, and while there's a lot of juice left to get out of being more clever with the data we get and getting higher-quality data, it's hard to conceive of the transformer being the final word on the architecture of intelligent machines. It's been amazingly robust, however, in terms of standing the test of time (despite its young age), in the sense that many variants have been proposed and few (if any) demonstrate statistically significant improvements over the "vanilla" transformer, especially when compared to dedicating a similar level of effort to just tuning it better and getting better data. But another architectural paradigm shift can, will, and probably must happen.

qxnt4 karma

What’s your opinion on the state of “self-driving” cars, and specifically Tesla?

And secondly, with GPT, deepfakes, stable diffusion, etc. we are at the dawn of an age where we can’t trust our own eyes and ears with anything online. AI is eroding the very concept of truth, and it’s already being weaponized. Do you think researchers have any responsibility to think about the consequences of their research?

egrefen5 karma

What’s your opinion on the state of “self-driving” cars, and specifically Tesla?

I have a deep dislike for Elon Musk as a person, and think he is full of hot air. That said, I drive a Model X, and have a lot of respect for Tesla's engineering team, and for Andrej Karpathy (who, I know, has left, but he did help set up that culture and momentum). More importantly, they have showcased how "get a good data stream from users" is a powerful moat for ML companies. Regarding self-driving tech, I think it's possible, and I think we'll get there eventually; I just wouldn't trust anything from Musk himself regarding it. That said, assisted driving as it exists in Teslas today is amazing. I recently drove my family from London to Paris via the Eurotunnel, and it's 95% highway driving. I found that driving on the highway with autopilot on causes like 20% of the mental strain and fatigue of normal highway driving, and I didn't feel that pooped after an 8h drive. The supercharger network is also a truly awesome aspect of Tesla, and if that idea came from Elon, then props to him.

And secondly, with GPT, deepfakes, stable diffusion, etc. we are at the dawn of an age where we can’t trust our own eyes and ears with anything online. AI is eroding the very concept of truth, and it’s already being weaponized. Do you think researchers have any responsibility to think about the consequences of their research?

Yes, I think we should fund and prioritise counter-measures. I don't think proscribing further research and development in these areas will help (and I know you're not suggesting that) because, in some sense, the cat's out of the bag. We just need, both as a society and within the tech sector, to think about how to navigate this minefield and balance the good this technology brings with the potential for its misuse. I'd like to see more AI safety centred around this real problem than around the x-risk crap, and I'm cognizant that there are some people working on this, but not enough.

Not to diminish the importance of the issue above, but we also need to be better as a group at not believing human generated misinformation, as there's still a lot more of that floating around.

fugitivedenim4 karma

what are the most interesting recent research developments in AI/ML? (not just what's hot in the news like Stable Diffusion and LLMs)

egrefen7 karma

Diffusion is cool from a technical perspective, and I'm curious to see how it will be applied more widely. I've always been a little meh about image generation, in that it's super impressive but I struggle to think about how I'd use even the current state of technology there to do anything other than art/creativity stuff (which is important! but just not my focus area).

I'd say Google LaMDA / ChatGPT are the coolest development in that they show we are on the cusp of something big in terms of practical language technology powered by AI, but aren't 100% there, which is exciting both in terms of seeing that development happen (as a user) and being a part of it (both as a scientist and entrepreneur).

KimiKimKimKitty3 karma

How can non-ML knowledge of linguistic/phonetics contribute to the ML based language/speech research, when everything just seems to be “let’s feed this raw data into some complex model”? In other words, if I want to do ML based phonetics research, is there a point of devoting my time in classical understanding of phonetics?

egrefen5 karma

I know people in ML love to quote the Jelinek line "Every time I fire a linguist, the performance of the speech recognizer goes up", but I genuinely think there's a place for formal training in linguistics in our current technological landscape. We need people trained in the analysis of the structure and patterns of language (and communication in general) to help drive the analysis and evaluation of large language models. Are these models competently using language? Is there an identifiable systematicity to the errors they make? What might this tell us about the data? What might this tell us about how to fix these issues? Is a language model trained to serve one language community necessarily going to transfer well to another? Some of these questions can and will be addressed empirically without the help of linguists, but I think we can get to more useful and less harmful results faster, cheaper, and more reliably by having people who are knowledgeable about language (beyond our shared competence in using it daily) involved in the evaluation, and perhaps design, of our systems.

Conversely, I think technology can support field linguistics well in e.g. the preservation of disappearing languages. See, for example, this 2009 paper by Steven Bird as a starting point.

factorsofhappiness2 karma

What in your opinion is the best way to conquer self-doubt?

egrefen1 karma

I'd love to say there's an easy solution, but it's something most people wrestle with in some form, and there's no one-size-fits-all solution. I personally operate on precedent and blind faith a lot. I remember during my masters in Philosophy at St Andrews, I had quite a large workload in terms of exams and essays, and often would be panicking late at night about whether I had any hope of reading enough or preparing enough to be able to write the essays on time or be ready for the exams. I think it's very easy to enter a destructive loop in these situations where the obvious solution is to just sit down and do the work, and you prevent yourself from doing just that by spending time worrying about it instead. What got me through that was just telling myself "You've managed to get through stressful exams before, so just sit down and prepare and you'll probably be fine this time". Of course, there was no guarantee of that, but just faking myself out like that got me unstuck enough to put in the work and prepare.

Of course, this doesn't apply to everyone, or every situation, and I was relying upon having had a foothold in the form of previous high-pressure moments where things had worked out. I guess one way to look at things is, if you struggle with self-doubt, start with things that will be easy wins, and use that to build up confidence by, let's face it, just lying to yourself. Sometimes, a little white self-lie is enough to give (possibly wholly undeserved) confidence, which in turn may prime the pump for more confidence about harder things if and when you manage to conquer those first, simple obstacles.

I don't know if any of this is helpful to you, but I hope it helps someone a little.

Superpe0n2 karma

What do you think will be the next “leap” in AI?

egrefen6 karma

It's hard to predict, and a lot could be written in speculating about this. In the interest of being able to address other questions here, I will refer you to the answer I gave to a related question asked by the delightfully named /u/ur_labia_my_INBOX.

lookingrightone2 karma

Hello there, do you think machine learning can make a huge difference in the restaurant industry? If yes, how can it make a revolution?

egrefen4 karma

For our Cohere summer hackathon, a group (primarily composed of interns from Brazil) used large language models to generate recipes, and then actually made them (after some manual pruning of recipes that would obviously be disgusting or kill us). Some were quite creative, such as a dessert involving vanilla ice cream and red wine.

The complexity and art that goes into cooking and the whole restaurant experience, from the kitchen to service, is not something I see being automated away anytime soon beyond automation that's already happened (see e.g. restaurants in Japan where you order from a machine, sit at a booth, and your ramen gets handed to you through a slot in under 2 mins, which have been around since at least the 80s). But we should always be careful with such predictions!

What I'm hoping to see is language models and other technologies being incorporated as creative partners into the work chefs do, and in how restaurants create a memorable, relaxing, exciting, or otherwise pleasurable experience for diners.

Natethepally2 karma

Hello Ed, thanks for doing this AMA!

You mention examining whether neural nets can reliably solve mathematical problems, and I have been reading a decent amount about AI/ML methods in mathematics research. Do you think AI/ML will overtake human reasoning for mathematics research, and if so, what sort of barriers are in the way of that occurring?

I need to know if I need to sabotage the machines to keep my career.

egrefen6 karma

Apologies for the quick answer here, because my thinking on the matter is evolving due to recent work by e.g. Francois Charton, DeepMind, and of course, ChatGPT. Neural theorem proving is a fascinating and complex area which a lot of highly dedicated and smart people are working on, and I believe we will evolve towards a point where, before long, computer-assisted proofs are produced at a level of abstraction and complexity they have not yet touched. However, when it comes to matching the surprising and, to me, still completely mysterious ability some humans have in introducing and solving new mathematical problems, I think the jury's still out on if, when, and how we'll get there.

When it comes to solving word maths problems at a level commensurate with what the average human needs to solve, practically, in daily life, we're either there already or will reliably be there in the next couple of years, I'd think.

post_singularity2 karma

Do you think the development and evolution of language played a role in the development and evolution of human sentience?

egrefen1 karma

If by "sentience" you mean "consciousness", then the short answer is yes because I think consciousness is a linguistic construct, and the longer answer is in my reply to /u/eddotman.

If by "sentience" you mean "intelligence", then yes because I think language is part and parcel of human (and similar) intelligence, although is not the total foundation of it, as there are—I believe—irreducibly non-verbal forms of reasoning and intelligence which we also employ.

killing4pizza2 karma

Are those AI art generators stealing art? That's where it learns from right? Actual art that people made?

egrefen3 karma

"Good artists borrow, great artists steal."

If we mean "steal" in the sense Picasso (allegedly) meant it in the above quote, then yes: AI art generators estimate the implicit underlying distribution which "generated" the art which centuries of human artists have produced, and then samples new things from that generation. In this sense, if you'll forgive me for anthropomorphizing this process a little, they are doing nothing more than what human artists due: observe other art and nature, and try to craft something new from what they liked and didn't like (the analogy only goes so far, so I really mean this in a very loose sense).

In the moral sense, I don't personally think this is stealing, any more than a human walking through the Louvre and being inspired to paint something by virtue of what they saw in the paintings they observed is stealing. Obviously, if the result ends up being almost identical, we enter the grey area of artistic plagiarism. If it's too derivative, then perhaps the issue is more: does it have sufficient originality to be considered good?

TreemanBlue1 karma

For someone who is interested in learning more about AI and machine learning, what/where would you recommend starting?

egrefen2 karma

There are many great starting points, including online courses like Andrew Ng's. More generally, I refer you to my (non-)answer to /u/vinz_w on this matter...

MakeLimeade1 karma

What do you think of using symbol classifications in addition to neural nets?

Also are there any ways yet to use real time feedback to retrain/correct models when they make mistakes?

MakeLimeade1 karma

Also (putting this here because you seem to focus on language instead of vision) - what do you think of Tesla removing lidar and ultrasonic sensors?

egrefen1 karma

Haven't they just announced they're re-adding them?

Doing everything via vision was a cool idea, but ultimately different sensors have different strengths and weaknesses, so why not have complementary information go into the system and thereby make it more robust?

khamuncents1 karma

Has a sentient AI actually been created, and if so, was it covered up by big tech?

Do you think AI created and controlled in a decentralized manner (such as a blockchain) would be a better route for development than having AI developed and controlled by a centralized corporation?

egrefen1 karma

Has a sentient AI actually been created, and if so, was it covered up by big tech?

See my answer to /u/eddotman regarding my view on sentience/consciousness. Depending on whether you see me as an eliminativist regarding the problem of consciousness, or are comfortable with the view that sentience is a linguistic fiction tracking the dynamics of systems on a spectrum defined by the complexity of said systems, then the answer is respectively either "No." or "Yes, but trivially so".

Do you think AI created and controlled in a decentralized manner (such as a blockchain) would be a better route for development than having AI developed and controlled by a centralized corporation?

I think eventually there will be a place for highly modularised or compositional AI, e.g. societies of agents with different specialisations, but we're not quite at that stage of development yet. If and when the time comes, I see no reason that such groupings need to be static, controlled by one entity, or centralised in any other manner. When it comes to how to best implement and govern decentralised collaborating agents, I am really not an expert, but perhaps the solution will lie in the blockchain or in something completely different. I leave it to smarter people than me to determine this, especially given my almost complete ignorance when it comes to that sector.

DigiMagic1 karma

What is your opinion on Tesla's new home robot? Can they really make it "smart" enough to handle usual home chores, or if not now then in 5-10 years?

egrefen2 karma

Tesla has great engineers, so let's see. That said, I don't tend to believe anything Elon says on a good day, and when it comes to release timelines... well let's just say I've been expecting FSD on Model X for some years now.

cqs1a1 karma

Why did you quit all those companies and which one are you quitting next?

egrefen2 karma

I left DeepMind, where I worked happily as it grew from 80 people when I started to over 1000 when I left, because I wanted to work in a smaller, startup-like outfit. I was given the opportunity to help build up Facebook AI Research's London office with some friends and colleagues from UCL, which was a lovely three years. As the company pivoted to the Metaverse and out of product-market fit, I lost confidence in the core business and assumed the financials would not permit the sort of independent blue-sky research that was happening within FAIR to continue, so I left and joined Cohere. I don't have any plans to leave at present, as I enjoy my role, the company, and the leadership's vision and way of doing things.

That said, I don't think there's anything wrong with changing jobs every few years if it allows you to pursue the challenges you are looking for and embrace the growth opportunities you think you need.

deathwishdave1 karma

Do you like movies about gladiators?

egrefen2 karma

I watched the first half and the last scene of Gladiator when I was in my late teens or early twenties at my cousins' house one summer. It was okay, although why do Romans always have British accents in films these days?

Haven't really seen any others, although I feel I should watch Ben-Hur...

SuperSneakyPickle1 karma

Not sure if this is the right place for this, but as a 4th year student in CS, looking to enter a career in ML, would you recommend taking a masters degree? I'm currently toying with the idea of starting a masters right away, or trying to work in the field for a year or so, then reassessing. Any thoughts on this/advice for someone looking to enter the field?

egrefen2 karma

I sort of touched upon this (and didn't) in my reply to /u/vinz_w. There's no one path, and there's no one source of experience that will get you where you want to be. If a masters sounds right to you because there's a programme that has advanced courses matching your growth areas, and research groups that can support a research project, then go for it. But it's not the only, or always the best, way to get that experience.

happy61911 karma

I'm a final-year university student interested in pursuing ML, and I have some certifications from Coursera's deeplearning.ai courses. What other sources do you recommend to get a good grip on machine learning as a beginner, so as to be good and successful in the field in the years to come?

egrefen2 karma

This is a hard question to answer in general, and there's no quick answer. See my reply to /u/vinz_w for a (non-)answer of sorts.

cOmMuNiTyStAnDaRdSs0 karma

How do you look yourself in the mirror or sleep at night knowing that you helped Facebook build the most socially-destructive dystopian form of weaponized media in human history?

egrefen3 karma

How do you look yourself in the mirror

I strangely enough stopped casting a reflection after signing a contract there.

or sleep at night

Coffin.