My recent research focuses on learning structured representations of the world from images and video.
-
Neural Language of Thought Models
[paper]
Yi-Fu Wu, Minseung Lee, Sungjin Ahn
ICLR 2024
Also appeared as "Object-Centric Semantic Vector Quantization" in: Causal Representation Learning Workshop and Unifying Representations in Neural Models Workshop at NeurIPS 2023
-
Inverted-Attention Transformers Can Learn Object Representations: Insights From Slot Attention
[paper]
Yi-Fu Wu, Klaus Greff, Gamaleldin F. Elsayed, Michael C. Mozer, Thomas Kipf, Sjoerd van Steenkiste
Causal Representation Learning Workshop and Unifying Representations in Neural Models Workshop at NeurIPS 2023
-
An Investigation into Pre-Training Object-Centric Representations for Reinforcement Learning
[website]
[paper]
Jaesik Yoon, Yi-Fu Wu, Heechul Bae, Sungjin Ahn
ICML 2023
-
Simple Unsupervised Object-Centric Learning for Complex and Naturalistic Videos
[website]
[paper]
Gautam Singh, Yi-Fu Wu, Sungjin Ahn
NeurIPS 2022
-
TransDreamer: Reinforcement Learning with Transformer World Models
[paper]
Chang Chen, Jaesik Yoon, Yi-Fu Wu, Sungjin Ahn
Deep RL Workshop at NeurIPS 2021
-
Generative Video Transformer: Can Objects be the Words?
[paper]
Yi-Fu Wu, Jaesik Yoon, Sungjin Ahn
ICML 2021
-
Improving Generative Imagination in Object-Centric World Models
Zhixuan Lin, Yi-Fu Wu, Skand, Bofeng Fu, Jindong Jiang, Sungjin Ahn
ICML 2020
-
SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition
[website]
[paper]
[code]
Zhixuan Lin*, Yi-Fu Wu*, Skand*, Weihao Sun*, Gautam Singh, Fei Deng, Jindong Jiang, Sungjin Ahn (* equal contribution)
ICLR 2020