My research interests lie in the areas of information theory, statistics, and machine learning. My current focus is on understanding the implicit regularization and stability properties of neural network optimization algorithms.

During my PhD, I developed new methods to decompose information into parts, allowing a fine-grained analysis of how information is distributed over composite systems consisting of multiple interacting subsystems. These methods are potentially useful in applications ranging from neuroscience and representation learning to robotics and cryptography.

See my Google Scholar page for an up-to-date list of publications.

Service: Reviewer for ICML, ISIT, and IEEE Transactions on Neural Networks and Learning Systems.

I co-organize the Math Machine Learning seminar MPI MiS + UCLA with Guido Montúfar.