Short Bio

I was born and raised in Jinan and Qingdao, China. I came to the US alone at age 15 and graduated from Clovis North High School. I graduated from UC Berkeley with a double major in Computer Science and Mathematics with highest distinction. During my undergraduate studies, I was very fortunate to be advised by Prof. Bruno Olshausen and Prof. Yubei Chen, and to be a member of the Redwood Center for Theoretical Neuroscience. I'm currently an M.S. student in EECS at UC Berkeley, advised by Prof. Bruno Olshausen. My research focuses on understanding the principles of decomposition and factorization in neural computation.
Feel free to email me at chobitstian@berkeley.edu if you are interested in my research or anything else!

Publications

URLOST: Unsupervised representation learning without stationarity or topology

[arXiv]
ICLR 2024 (under review)
We developed an unsupervised learning model for generic high-dimensional data. The model demonstrates strong performance across diverse data modalities, from neural recordings to gene expression.
Minimalistic unsupervised representation learning with the sparse manifold transform

[arXiv]
ICLR 2023 (spotlight)
We constructed a two-layer, interpretable unsupervised learning model based on principles from neural computation. The model remains competitive with deep learning methods.
Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors

[arXiv] [Demo]
NAACL DeeLIO Workshop 2021
We used sparse coding to visualize Large Language Models (LLMs). By adopting principles from neural computation, we greatly improved the interpretability of LLMs. Our results suggest that LLMs compute in a manner akin to the human brain, building representations that predict upcoming language at multiple levels of abstraction.

Work in preparation

Natural retinal cone distributions emerge from optical and neural limits to vision

VSS 2024 in submission
We model the visual system with a chromatic-aberration-constrained optical simulation and a learnable cone mosaic sampling scheme. We show that when optimized for both spatial acuity and color acuity, the model's emergent cone mosaic resembles the cone mosaic found in humans.
Factorizing motion and form via motion straightening and resonator network

in preparation
We applied the principles of sparsity and slowness to build representations of natural scenes. We then used coupled Hopfield network dynamics to learn motion patterns and shape patterns.
Stability-driven design and denoising of sparse autoencoders

in preparation
We proposed a stability-driven autoencoder for unsupervised learning. The method is used to extract gene expression patterns.