workingatbeingbetter
Highest Rated Comments
workingatbeingbetter · 178 karma
In terms of the actual neuroscientific processes, do we know exactly how psilocybin "rewires" the brain?
I know that is a very high-level question, but when I took a handful of undergrad neuroscience classes on drugs and behavior (circa 2007), I recall researchers not having a clear answer as to what exact process(es) caused the underlying physical structures of the brain (dendrites, glial cells, etc.) to change and form new and/or stronger connections. I believe one theory at the time involved psilocybin acting on glial cells, temporarily weakening them to allow for neuroplastic changes, but I'm about 15 years behind on the literature in this field, so I'm curious if you all could shed some more light on this topic. Thanks in advance!
workingatbeingbetter · 153 karma
As a follow-up to that, Netflix has a notorious perform-or-perish culture. I'm curious what Marc thinks of the various human consequences associated with this culture (e.g., increased stress levels, substance abuse, early career burnout) and whether the company is working to combat some of these deleterious effects.
workingatbeingbetter · 60 karma
If it makes you feel better, I was one of those econ/poli sci majors who went to law school. I was in my third year (3L) in 2015, in a class called "Poverty Law", when I read "Why Zebras Don't Get Ulcers", and I based my final paper in part on a portion of that book. I later went on to get a physics degree and then to grad school for electrical and computer engineering, but your book was very much an influence on me when I was merely a law student.
workingatbeingbetter · 54 karma
For all that is holy, this. VBA is an atrocious language, and since I only ever use it for a few Microsoft products, I have to relearn its nuances every time. It would be amazing to have better integration with Python or even C++, because then I could finally redo some of the spreadsheets people at my office have used forever, and I wouldn't have to explain to them the various things they might need to do to run a program outside of Excel.
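For what it's worth, a fair amount of this is already doable from outside Excel with a library like openpyxl. Here's a minimal sketch -- the file name, sheet name, and column layout are all made up for illustration, so treat it as an assumption rather than a drop-in replacement for any particular macro:

```python
# Minimal sketch: doing a typical VBA-macro job (sum a column, write a
# summary cell) from Python with openpyxl. All names here are hypothetical.
from openpyxl import load_workbook

wb = load_workbook("office_report.xlsx")   # hypothetical workbook
ws = wb["Totals"]                          # hypothetical sheet name

# Sum column B, skipping the header row and any non-numeric cells.
total = sum(
    row[1]
    for row in ws.iter_rows(min_row=2, values_only=True)
    if isinstance(row[1], (int, float))
)

ws["D1"] = "Grand total"
ws["E1"] = total
wb.save("office_report_updated.xlsx")
```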
workingatbeingbetter · 319 karma
About 90% of my job involves quickly and accurately understanding ML/AI papers from a wide variety of fields (a few months ago I actually had to revisit two of John Langford's papers from the early 2000s: one on cross-platform remote secure backup systems and his CAPTCHA paper with Luis von Ahn and others), so I'll give you some of my thoughts.
First, LEARN TO READ SLOWLY AND PRECISELY AND WITH A PURPOSE. These are not novels. You are almost always reading these papers with the intent to learn something. Figure out what that is ahead of time and seek it out in the paper. You do not have to read these papers serially from start to finish. Look for what you need and go from there. If you're very unfamiliar with the subject, read the title and then the abstract first. If you're pretty familiar with the subject, maybe you can jump to the conclusion first and then to the end of the discussion section. The more papers you read, the more you will understand this common structure and the more you'll be able to quickly find what you are looking for.
For example, say I have a paper on a particular way of using confidence labels in ML and I'm trying to figure out whether I can use it to write better code for an autonomous vehicle system. I will read the title first and then the abstract, and I will look up any words I don't completely understand. For example, are the terms "class label" or "confidence label" terms of art with very specific definitions? I didn't know, so I googled "'class label' machine learning" and clicked the first Stack Overflow link. Continue doing this until you understand what is happening. If you are even the slightest bit unsure what something means, look it up. Use Wikipedia and Reddit as resources -- they are generally excellent. But don't try to read the paper like a novel. For most people these papers are primarily utilities. That said, a select few read them specifically to critique them and find holes in the science; that generally requires subject matter expertise and a close reading from start to finish, but I won't go further into that since most people don't do it.
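To make that "class label" vs. "confidence" distinction concrete, here's a minimal sketch (toy data and scikit-learn are my own assumptions for illustration, not something from any particular paper): the class label is the hard prediction, while the per-class probability is one common notion of confidence in that label.

```python
# Minimal sketch: hard class labels vs. per-class "confidence" scores.
# Toy data and scikit-learn are used purely for illustration.
from sklearn.linear_model import LogisticRegression

X = [[0.0], [0.3], [0.7], [1.0]]   # toy one-dimensional features
y = [0, 0, 1, 1]                   # class labels used for training

clf = LogisticRegression().fit(X, y)

hard_label = clf.predict([[0.6]])        # the predicted class label
confidence = clf.predict_proba([[0.6]])  # probability per class, one notion of confidence
print(hard_label, confidence)
```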
Second, have a decent mathematical foundation, particularly in linear algebra, statistics, and calculus. You don't need to remember every proof or algorithm or even be able to do all of the math yourself, but you need to be able to read an equation and understand how the variables relate to one another in the context of the function. Also, learn about Markov chains, Bayesian statistics and probability, and decision trees. You can get deep into the weeds here with graph theory, topology, PDEs, and even game theory mathematics, but for a foundation you really just need the top few things I mentioned here.
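To make a couple of those ideas concrete, here's a small worked sketch (my own toy numbers, not from any particular source) of Bayes' rule and a single Markov chain transition step:

```python
# Small worked sketch of Bayes' rule and a Markov chain step; all numbers
# are made up for illustration.
import numpy as np

# Bayes' rule: P(disease | positive test) from a prior and test characteristics.
prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)
evidence = sensitivity * prior + false_positive * (1 - prior)   # P(positive)
posterior = sensitivity * prior / evidence
print(f"P(disease | positive) = {posterior:.3f}")               # roughly 0.16

# Markov chain: a 2-state transition matrix applied to a state distribution.
P = np.array([[0.9, 0.1],     # row i holds the probabilities of leaving state i
              [0.4, 0.6]])
state = np.array([1.0, 0.0])  # start in state 0 with certainty
for _ in range(3):
    state = state @ P         # one transition step
print(state)                  # distribution over states after 3 steps
```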
Third, develop an understanding of computers, both programmatically and from the hardware side. There is a lot here, but it helps to understand how memory is allocated to programs, how parallel programming works, and how particular algorithms and/or data structures can optimize certain use cases. Understanding these things will give you a deeper sense of why certain approaches are proposed over others. From the hardware side, it's helpful to understand, for example, the physical limits of different sensors/approaches in computer vision, and how those limits can interact with other aspects of your code or be overcome with additional hardware (e.g., adding a magnetometer to a drone programmed for autonomous flight so it can orient itself relative to the Earth's magnetic field).
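As one tiny illustration of the data structure point (my own toy example, not tied to any specific paper): the same membership query is a linear scan against a list but a hash lookup against a set, and that gap matters once inputs get large.

```python
# Tiny sketch: the same membership test against a list (linear scan)
# versus a set (hash lookup). Sizes are arbitrary.
import time

items_list = list(range(1_000_000))
items_set = set(items_list)

start = time.perf_counter()
_ = 999_999 in items_list          # scans the whole list in the worst case
list_time = time.perf_counter() - start

start = time.perf_counter()
_ = 999_999 in items_set           # a single hash lookup
set_time = time.perf_counter() - start

print(f"list: {list_time:.6f}s  set: {set_time:.6f}s")
```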
Fourth, familiarize yourself with AI/ML concepts. The YouTube channel 3Blue1Brown has an excellent video series on neural networks and ML/AI to start you off. Once you understand the general ideas and how they operate, slowly read through the Wikipedia entry on machine learning. In particular, look at the "Approaches" section, and feel free to follow linked articles from there. If you want to get into deeper issues at the most advanced levels, go to university websites, look for faculty in CS/ML/AI, and then look at their research interests. Or, if you want to bridge the gap between Wikipedia and those faculty research papers, read relevant chunks of the "Deep Learning" book by Ian Goodfellow (the genius son of a bitch who invented generative adversarial networks -- or GANs -- when he was like 28).
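If it helps to see what those videos are building toward, here's a minimal sketch of a one-hidden-layer network's forward pass -- random placeholder weights, my own example, just the equations written out in code:

```python
# Minimal sketch: forward pass of a one-hidden-layer neural network with
# random placeholder weights, purely to show the structure in code.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)            # a 4-dimensional input
W1 = rng.normal(size=(8, 4))      # weights: input -> hidden layer (8 units)
b1 = np.zeros(8)
W2 = rng.normal(size=(3, 8))      # weights: hidden -> output (3 classes)
b2 = np.zeros(3)

hidden = np.maximum(0, W1 @ x + b1)             # ReLU activation
logits = W2 @ hidden + b2
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the classes
print(probs)
```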
If you do even most of the above, you will have an EXCELLENT foundation for reading these papers. After that, it's just practice to get faster. I've been doing this for years, and I probably read 20-30 papers in this field on an average day. Sometimes it takes me a couple of minutes to find what I need, and sometimes I will work through a paper for days or weeks on end (on the rare occasions when I need to understand absolutely everything about it).
Anyway, I hope that helps.