Muxiao Liu

Research, PhD, and Interdiscipline

Inspired by a conversation with Leo

Note · v1.0.1a (historical) · Created Dec 2, 2025 · Updated Dec 10, 2025


Over Thanksgiving break, I was visiting San Francisco. My mentor Leo hadn't replied to my emails all summer, so I thought I'd shoot him a short one, asking if we could meet up and thanking him for all his help along the way.

It was indeed a long and fruitful journey.

Months of library time, especially during my internship at CSC, went into this project (remarkable for someone who never once sat in the library during three years of high school). I worked my way up from the most basic neural networks, to CNNs, to SNNs…

Well, he replied, and we ended up spending an entire morning at Stanford and sharing lunch. Here are some interesting topics and observations I picked up.

Pursuing a PhD

Pursuing a PhD means spending 5 years of your life doing nothing but research.
[Image: the ideal path for someone pursuing a PhD, in Leo's view. Graph generated with Nano Banana.]

That means the decisions and predictions to make are: whether I will learn more in those years than I would from industry experience, whether I am willing to spend the first ~40 years of my life below the curve, and whether my growth after the PhD will be faster than it would be otherwise.

Research

The market cycles between research and scaling. Five years ago, if you had submitted a proposal to develop a chip that was 10% faster than the current one, you would not have gotten any grant - Moore's Law already delivered that improvement by default. The market doesn't need a risky attempt at what it already gets for free.

Today, Moore's Law is breaking down. The physics of silicon prevents transistors from shrinking further while still performing well at a reasonable cost. This is not strictly a technological problem - Scanning Tunneling Microscopy can already place an individual atom exactly where we need it to be; it is simply far too expensive for the mass production of chips. There are also engineering challenges: at the single-digit-nanometer scale, transistors start to suffer from the quantum tunneling effect. Not to mention the thermal limits of packing the same amount of circuitry into a smaller space with less heat-dissipating surface area.

“So it’s back to the age of research again.” - Ilya Sutskever

That is why he believes this is a good time for research, especially in computational hardware. Leo is working on event-based cameras, specifically edge processing for cameras (this is more "edge" than the typical edge computing we talk about: it means making the processing physically happen right behind the sensors).

Interdiscipline

I also got to ask Leo about one of my most curious questions about this lab:

Do you actually work with researchers in neuroscience?

Yes, the lab has researchers focusing on neuroscience, algorithms, machine learning, hardware, and so on. This is conceptually cool and futuristic - an interdisciplinary research lab that brings frontier brain science together with computer science. It can certainly be argued that there are analogies to be drawn between machine learning and the brain. But how do these collaborations actually happen? To what extent do they happen? Is neuroscience merely inspirational? Or is it practically useful in the development of their work?

Leo said he always works with the neuroscientists. A typical workflow looks like this:
1. The neuroscientist, through experiments and observation, proposes a new model.
2. The algorithm guy emulates the model in code, trying to prove that it actually works and to evaluate its performance.
3. The hardware guy takes the code and figures out how to build a chip that suits the need. He would need to worry about impedance, but probably not about the time complexity of inference.

Throughout this process, they talk to each other, figuring out when to diverge from biology and exploit the advantages of silicon, and when to stick to brain models to address bottlenecks that conventional chips would face.
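To make step 2 concrete: one of the simplest biological models the algorithm side might emulate is a leaky integrate-and-fire (LIF) neuron, the basic building block of the SNNs mentioned earlier. A minimal, illustrative sketch - all parameters here are arbitrary, not from any particular lab's work:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    Returns the membrane-potential trace and the time steps at which
    the neuron spiked. Parameters are illustrative, in arbitrary units.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input
        v += (dt / tau) * (v_rest - v) + i_in * dt
        if v >= v_thresh:      # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset        # reset the membrane potential
        trace.append(v)
    return np.array(trace), spikes

# Constant input strong enough to drive periodic spiking
trace, spikes = simulate_lif(np.full(100, 0.06))
print(f"spiked {len(spikes)} times, first at step {spikes[0]}")
```

Once a software emulation like this matches the neuroscientist's observations, the hardware side can worry about realizing the same dynamics in analog or digital circuits.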

Leo did not answer the question of whether neuromorphic hardware tends to be "better or worse" than traditional hardware. Having thought about it more, I don't think that's a meaningful question at all. In the face of scaling challenges, any research attempt is valuable, and I do agree: biology is the best reference to use when there is nothing else out there. The history of invention is the history of building things that replicate (and then exceed) parts of ourselves. In that sense, invention is an internal exploration, while research is an external one. While we have been victorious on that first journey, there is still one last thing we cannot replicate: the brain.
