Prof. Richtárik's research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high-performance computing, and applied probability. He is interested in developing zeroth-, first-, and second-order algorithms for convex and nonconvex optimization problems described by big data, with a particular focus on randomized, parallel, and distributed methods. He is a co-inventor of federated learning.
Selected Publications
Vladimir Malinovskii, Denis Mazur, Ivan Ilin, Denis Kuznedelev, Konstantin Burlachenko, Kai Yi, Dan Alistarh and Peter Richtárik
PV-Tuning: Beyond straight-through estimation for extreme LLM compression
Advances in Neural Information Processing Systems 37 (NeurIPS 2024)
Peter Richtárik, Elnur Gasanov and Konstantin Burlachenko
Error feedback reloaded: From quadratic to arithmetic mean of smoothness constants
12th International Conference on Learning Representations (ICLR 2024)
Alexander Tyurin and Peter Richtárik
Optimal time complexities of parallel stochastic optimization methods under a fixed computation model
Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Ilyas Fatkhullin, Alexander Tyurin and Peter Richtárik
Momentum provably improves error feedback!
Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Yury Demidovich, Grigory Malinovsky, Igor Sokolov and Peter Richtárik
A guide through the zoo of biased SGD
Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Kaja Gruntkowska, Alexander Tyurin and Peter Richtárik
EF21-P and friends: Improved theoretical communication complexity for distributed optimization with bidirectional compression
40th International Conference on Machine Learning (ICML 2023)