Federated Learning

Motivating Examples

What is federated learning

Federated learning [1], [2] is a kind of distributed learning.

How does federated learning differ from traditional distributed learning?

  1. Users retain control over their own devices and data.

  2. Worker nodes are unstable: user devices can go offline or drop out of training at any time.

  3. The communication cost is much higher than the computation cost.

  4. The data stored on worker nodes are not IID (independent and identically distributed).

  5. The amount of data per node is severely imbalanced (points 4 and 5 are illustrated by the sketch after this list).
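
A concrete, hypothetical illustration of points 4 and 5: the snippet below partitions a 10-class dataset so that each client holds only two classes (non-IID) and the shard sizes vary by roughly two orders of magnitude (imbalanced). The dataset and partition scheme are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in labels for a 10-class dataset (e.g. digits); purely illustrative.
labels = rng.integers(0, 10, size=60_000)

clients = {}
for c in range(10):
    classes = (c, (c + 1) % 10)                 # each client sees only 2 of 10 classes
    pool = np.flatnonzero(np.isin(labels, classes))
    n = int(rng.integers(100, 10_000))          # shard sizes differ by ~100x
    clients[c] = rng.choice(pool, size=n, replace=False)

print({c: len(idx) for c, idx in clients.items()})  # severely imbalanced, non-IID shards
```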

Let us recall parallel gradient descent
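
The baseline to compare against is synchronous data-parallel gradient descent, where every gradient step costs one communication round. Below is a minimal sketch on a least-squares objective, assuming a central server and a list of per-worker data shards; all names are illustrative, not from the source.

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of the local least-squares loss ||Xw - y||^2 / n on one worker's shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def parallel_gradient_descent(shards, dim, lr=0.1, steps=100):
    """Synchronous data-parallel gradient descent with a central server:
    one communication round per gradient step."""
    w = np.zeros(dim)
    for _ in range(steps):
        # In a real system each worker computes its gradient in parallel;
        # this loop stands in for that.
        grads = [local_gradient(w, X, y) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)   # server averages gradients and updates w
    return w
```

Running this for T steps costs T communication rounds, which is exactly the cost federated averaging tries to reduce.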

Federated Averaging Algorithm
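
FedAvg [1] replaces the single gradient step per round with several epochs of local SGD; the server then averages the returned model weights, weighting each client by its local sample count. A minimal sketch under the same least-squares setup as above, assuming full client participation per round (the paper also samples only a fraction of clients each round):

```python
import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5, batch=32):
    # Several epochs of plain SGD on one client's local least-squares loss.
    w = w.copy()
    n = len(y)
    for _ in range(epochs):
        order = np.random.permutation(n)
        for s in range(0, n, batch):
            b = order[s:s + batch]
            w -= lr * 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
    return w

def federated_averaging(clients, dim, rounds=50):
    """One communication round per *several* local epochs: the server broadcasts w,
    each client trains locally, and the server takes a data-size-weighted
    average of the returned models."""
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        local_models = [local_sgd(w, X, y) for X, y in clients]  # in parallel on clients
        w = sum((len(y) / total) * wk for (_, y), wk in zip(clients, local_models))
    return w
```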

Computation vs. Communication
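
In federated learning a communication round (over a wide-area network, to user devices) costs far more than a local gradient step, so FedAvg deliberately spends more computation per round to save rounds. Under a simple cost model (an illustrative assumption, not from the source), running E local epochs per round for T total local epochs costs

```latex
\text{total cost}
  \;=\; \underbrace{T \, t_{\mathrm{comp}}}_{\text{computation}}
  \;+\; \underbrace{\frac{T}{E} \, t_{\mathrm{comm}}}_{\text{communication}}
```

where t_comp is the time per local epoch and t_comm the time per communication round. Increasing E leaves the computation term unchanged but divides the communication term by E. On the benchmarks in [1], FedAvg needs 10-100x fewer communication rounds than synchronized SGD.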

References

  • [1] McMahan et al.: Communication-Efficient Learning of Deep Networks from Decentralized Data. In AISTATS, 2017.

  • [2] Konečný, McMahan, and Ramage: Federated Optimization: Distributed Optimization Beyond the Datacenter. arXiv:1511.03575, 2015.
