Short Bio

I'm currently a first-year PhD student in EECS at UC Berkeley, affiliated with the Redwood Center for Theoretical Neuroscience and BAIR, advised by Prof. Bruno Olshausen and Prof. Bin Yu. I also work closely with Prof. Yubei Chen at UC Davis. My research focuses on understanding principles of neural computation: the principles behind the representations formed in neural networks. I was born and raised in Jinan and Qingdao, China. I came to the US alone at age 15 and graduated from Clovis North High School. I received both my bachelor's degree (Math and CS) and master's degree (EECS) from UC Berkeley. During my undergraduate and master's studies, I was very fortunate to be advised by Prof. Bruno Olshausen and Prof. Yubei Chen.
I have also spent a good deal of time in NYC. Feel free to email me at chobitstian@berkeley.edu if you'd like to chat about research online or grab a coffee in person.

Publications

URLOST: Unsupervised representation learning without stationarity or topology

[arXiv]
ICLR 2025
We developed an unsupervised learning model for generic high-dimensional data. The model demonstrated strong performance across diverse data modalities, from neural recordings to gene expression.
Predictive and Invariant Representations via Motion and Form Factorization in Natural Scenes

Cosyne 2025
We applied the principles of sparsity and temporal consistency to build factorizable representations of natural scenes.
Denoising for Manifold Extrapolation

NeurIPS SciForDL workshop 2024
We applied the principles of sparsity and temporal consistency to build factorizable representations of natural scenes.
Natural retinal cone distributions emerge from optical and neural limits to vision

VSS 2024
We model the visual system with a chromatic-aberration-constrained optical simulation and design a learnable cone mosaic sampling. We show that when the model is optimized for both spatial acuity and color acuity, its emergent cone mosaic resembles the cone mosaic found in humans.
Minimalistic unsupervised representation learning with the sparse manifold transform

[arXiv]
ICLR 2023 (spotlight)
We constructed a two-layer, interpretable unsupervised learning model based on principles from neural computation. The model remains competitive with deep learning methods.
Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors

[arXiv] [Demo]
NAACL DeeLIO Workshop 2021
We used sparse coding to visualize Large Language Models (LLMs). By adopting principles of neural computation, we greatly improved the interpretability of LLMs. Our results suggest that LLMs compute in a manner akin to the human brain, building representations that predict upcoming language at multiple levels of abstraction.

Work under preparation

Stability-driven design and denoising of sparse autoencoders

in preparation
We proposed a stability-driven autoencoder for unsupervised learning. We apply the method to extract gene expression patterns.