Post-doc: Development of algorithms for decentralized, resilient federated learning.

Job Type: Full-time
Deadline: 15 Jun 2021

Offer description

The postdoctoral fellow will join the Carnot FANTASTYC project which puts together researchers on distributed ledger technology, privacy and machine learning with the aim of developing software assets for decentralized, privacy-preserving and resilient federated learning.

Federated learning (FL) is a machine learning setting in which many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. a service provider), while keeping the training data decentralized and communicating only model parameters [1]. In traditional federated learning, the central server orchestrates the training process and receives the contributions of all clients, and therefore constitutes both a single point of failure and a communication bottleneck.

Against this background, the first objective of this fellowship is to design a fully decentralized and efficient version of federated learning, in which communication with the server is replaced by peer-to-peer communication between individual clients over a communication graph. In this peer-to-peer setting there is no longer a global model state, but the process can be designed so that all local models converge to the desired global solution, i.e. the individual models gradually reach consensus (a minimal sketch of this setting is given after this paragraph). In doing so, the successful applicant is expected to tackle some of the open challenges raised by the move to decentralized learning, including: (1) the design, specification and implementation of efficient decentralized learning protocols; (2) the evaluation of the communication and computational costs of these protocols on different network topologies, possibly leading to the design of new resource-aware distributed learning protocols; and (3) the trade-off between generic and personalized models, depending on how non-IID the data distributions available to individual clients are (e.g. different models for clusters of participants).

For the design and implementation of the distributed framework, the post-doc is expected to collaborate with the other CEA labs involved in the project, which will provide a privacy-preserving distributed ledger technology infrastructure. The other focus of this position is the study of the robustness of decentralized federated learning against malicious participants (i.e. Byzantine attacks) [2,3].
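By way of illustration only, the sketch below shows the kind of peer-to-peer training loop described above: each client takes a gradient step on its own private data and then mixes its model with those of its neighbours on the communication graph, so that the local models gradually reach consensus without a central server. Replacing the plain mean by a robust rule such as the coordinate-wise median (or Krum [3]) is one standard way to tolerate Byzantine participants. All names, the toy least-squares objective, the ring topology and the mixing schedule are assumptions made for the example; this is not part of the project codebase.

import numpy as np

def local_gradient(w, X, y):
    """Gradient of a least-squares loss on one client's private data."""
    return X.T @ (X @ w - y) / len(y)

def aggregate(models, robust=False):
    """Average neighbour models; the coordinate-wise median is one simple
    Byzantine-tolerant alternative to the plain mean."""
    stacked = np.stack(models)
    return np.median(stacked, axis=0) if robust else stacked.mean(axis=0)

def decentralized_fl(clients, graph, rounds=200, lr=0.1, robust=False):
    """clients: list of (X, y) tuples; graph: dict client -> list of neighbours.
    Each round: a local gradient step on private data, then a peer-to-peer
    mixing step over the communication graph, so that all local models
    converge to a common solution without any central server."""
    dim = clients[0][0].shape[1]
    models = [np.zeros(dim) for _ in clients]
    for _ in range(rounds):
        # local update step on each client's own data
        models = [w - lr * local_gradient(w, X, y)
                  for w, (X, y) in zip(models, clients)]
        # peer-to-peer mixing step with the neighbours on the graph
        models = [aggregate([models[i]] + [models[j] for j in graph[i]], robust)
                  for i in range(len(clients))]
    return models

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0, 0.5])
    # three clients on a ring topology, each holding its own private data
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 3))
        clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
    ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    for w in decentralized_fl(clients, ring):
        print(np.round(w, 3))  # every local model ends up close to w_true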

The application domain envisaged in the project is personalized privacy-preserving health monitoring.

[1] Google AI blog: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html

[2] Blanco-Justicia, A., Domingo-Ferrer, J., Martínez, S., Sánchez, D., Flanagan, A., & Tan, K. E. (2020). Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions. arXiv preprint arXiv:2012.06810.

[3] Blanchard, P., El Mhamdi, E. M., Guerraoui, R., & Stainer, J. (2017). Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent. NIPS 2017, 119-129.

Application and contacts

To apply, send an updated CV and a motivation letter to:

Cédric Gouy-Pailler (cedric.gouy-pailler@cea.fr)

Aurélien Mayoue (aurelien.mayoue@cea.fr)

Meritxell Vinyals (meritxell.vinyals@cea.fr)

