Straggler-Resilient Personalized Federated Learning [pdf], I. Tziotis, Z. Shen, R. Pedarsani, H. Hassani, A. Mokhtari, Transactions on Machine Learning Research (TMLR), 2023.

The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance [pdf], M. Faw, I. Tziotis, C. Caramanis, A. Mokhtari, S. Shakkottai, R. Ward, Conference on Learning Theory (COLT), 2022.

Straggler-Resilient Federated Learning: Interplay Between Statistical Accuracy and System Heterogeneity [pdf], A. Reisizadeh, I. Tziotis, H. Hassani, A. Mokhtari, R. Pedarsani, IEEE Journal on Selected Areas in Information Theory (JSAIT), 2022.

Adaptive Node Participation in Straggler-Resilient Federated Learning [pdf], A. Reisizadeh, I. Tziotis, H. Hassani, A. Mokhtari, R. Pedarsani, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.

Achieving Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking [pdf], I. Tziotis, C. Caramanis, A. Mokhtari, Neural Information Processing Systems (NeurIPS), 2020.
Objective Oriented Personalization in Federated Learning
We propose and analyze personalization models for various objectives (maximum participation, maximum welfare, fairness) in the Bayesian hierarchical setting for mean estimation and federated learning. We further present experimental results illustrating the performance of the different personalization models with respect to these objective functions.

Demystifying the Price of System Heterogeneity in Federated Learning
In this work we explore the trade-off between data and system heterogeneity in federated learning. We propose and analyze a novel asynchronous method that adaptively selects the aggregation weights at every round based on each client's model quality, speed, and level of heterogeneity. Our experimental results indicate that the proposed scheme enjoys significant benefits in accuracy and convergence speed compared to traditional federated learning baselines.
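To make the adaptive-aggregation idea concrete, here is a minimal sketch in NumPy. It is not the method from the work above: the function name, the softmax-style scoring rule, and the way staleness and local loss enter the score are all illustrative assumptions; the actual weighting rule may combine these signals differently.

```python
import numpy as np

def adaptive_aggregate(global_model, client_updates, staleness, local_losses,
                       temperature=1.0):
    """Hypothetical sketch of adaptive aggregation: each client's update is
    weighted by a score that decays with its staleness (slower clients
    contribute less) and its local loss (a rough proxy for model quality).
    This is an illustrative rule, not the one analyzed in the paper."""
    scores = np.array([np.exp(-(s + l) / temperature)
                       for s, l in zip(staleness, local_losses)])
    weights = scores / scores.sum()  # normalize so the weights sum to 1
    # Weighted average of the client updates, applied to the global model
    aggregated_update = sum(w * u for w, u in zip(weights, client_updates))
    return global_model + aggregated_update
```

A synchronous baseline such as FedAvg corresponds to fixing the weights (e.g. proportional to local dataset sizes); the point of the adaptive scheme is that the weights change every round as client speed and model quality evolve.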