Research

My research interests lie in the areas of information theory, statistics, and deep learning. My current focus is on understanding the implicit regularization and stability properties of neural network optimization algorithms. I am also interested in information-theoretic aspects of control and unsupervised reinforcement learning.


During my PhD, I developed new methods for decomposing information into parts, enabling a fine-grained analysis of how information is distributed across composite systems consisting of multiple interacting subsystems. These methods are potentially useful in applications ranging from neuroscience and representation learning to robotics and cryptography.


See my Google Scholar page for an up-to-date list of publications.


Service: Reviewer for NeurIPS (top 8% of reviewers, 2021), ICML, ICLR, ISIT, and IEEE TNNLS.

I co-organize the Math Machine Learning Seminar MPI MiS + UCLA with Guido Montúfar.