User:Beznosikov.an
From MachineLearning.
Version as of 11:27, 2 March 2020
Aleksandr Beznosikov
MIPT, Department of Control and Applied Mathematics (FUPM), group 674
email: beznosikov.an@phystech.edu
Chair: "Intelligent Systems"
Specialization: "Data Mining"
Research Reports
Spring 2020, 8th semester
On Biased Compression for Distributed Learning
In the last few years, various communication compression techniques have emerged as an indispensable tool helping to alleviate the communication bottleneck in distributed learning. However, despite the fact that biased compressors often show superior performance in practice when compared to the much more studied and understood unbiased compressors, very little is known about them. In this work we study three classes of biased compression operators, two of which are new, and their performance when applied to (stochastic) gradient descent and distributed (stochastic) gradient descent. We show for the first time that biased compressors can lead to linear convergence rates both in the single node and distributed settings. Our distributed SGD method enjoys the ergodic rate O(δL exp(−K)/μ + (C + D)/(Kμ)), where δ is a compression parameter which grows when more compression is applied, L and μ are the smoothness and strong convexity constants, C captures stochastic gradient noise (C = 0 if full gradients are computed on each node) and D captures the variance of the gradients at the optimum (D = 0 for over-parameterized models). Further, via a theoretical study of several synthetic and empirical distributions of communicated gradients, we shed light on why and by how much biased compressors outperform their unbiased variants. Finally, we propose a new highly performing biased compressor, a combination of Top-k and natural dithering, which in our experiments outperforms all other compression techniques.
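The Top-k operator mentioned above is the canonical example of a biased compressor: it keeps only the k largest-magnitude coordinates of a vector, so its expectation is not the original vector, yet the compression error is bounded. A minimal NumPy sketch (an illustration, not the paper's implementation):

```python
import numpy as np

def top_k(x, k):
    """Top-k compressor: keep the k largest-magnitude coordinates, zero the rest.

    This operator is biased (E[top_k(x)] != x in general), but its error
    is controlled: ||top_k(x) - x||^2 <= (1 - k/d) ||x||^2 for x in R^d.
    """
    d = x.size
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), d - k)[-k:]  # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

# Example: compress a gradient-like vector, keeping 2 of 5 coordinates.
g = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(top_k(g, 2))  # keeps -3.0 and 2.0, zeros the rest
```

Only k coordinates (plus their indices) need to be communicated per round, which is the source of the bandwidth savings; the paper's proposed compressor further applies natural dithering to the surviving coordinates.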
Publication
- Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan. On Biased Compression for Distributed Learning // arXiv preprint. — 2020. https://arxiv.org/abs/2002.12410
Autumn 2019, 7th semester
Derivative-Free Method For Decentralized Distributed Non-Smooth Optimization
In this paper, we propose a new derivative-free method, based on the Sliding Algorithm from Lan (2016, 2019), for the convex composite optimization problem consisting of two terms: a smooth one and a non-smooth one. We prove a convergence rate for the new method that matches the corresponding rate for the first-order method up to a factor proportional to the dimension of the space. We apply this method to decentralized distributed optimization and prove bounds on the number of communication rounds that match the lower bounds. We also prove a bound on the number of zeroth-order oracle calls per node that matches the analogous state-of-the-art bound for first-order decentralized distributed optimization up to a factor proportional to the dimension of the space.
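A zeroth-order oracle, as used above, replaces the gradient with an estimate built from function values only. A common construction is the two-point estimator along a random direction; the sketch below illustrates the idea under simplified assumptions (Gaussian-sphere directions, a fixed smoothing step `tau`) and is not the paper's exact smoothing scheme:

```python
import numpy as np

def zo_gradient(f, x, tau=1e-4, rng=None):
    """Two-point zeroth-order gradient estimator along a random unit direction.

    Uses only function evaluations:
        g = d * (f(x + tau*e) - f(x - tau*e)) / (2*tau) * e,
    where e is a random unit vector in R^d. A single call is a noisy,
    high-variance estimate; its expectation approximates grad f(x).
    """
    rng = rng or np.random.default_rng(0)
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)  # uniform random direction on the unit sphere
    return d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

# Example: for f(x) = ||x||^2 the true gradient is 2x; averaging many
# single-direction estimates recovers it approximately.
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0])
rng = np.random.default_rng(42)
est = np.mean([zo_gradient(f, x, rng=rng) for _ in range(5000)], axis=0)
```

The extra factor of d in the estimator is exactly where the dimension-proportional factor in the oracle-complexity bounds comes from.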
Publication
- Aleksandr Beznosikov, Eduard Gorbunov, Alexander Gasnikov. Derivative-Free Method For Decentralized Distributed Non-Smooth Optimization // arXiv preprint. — 2019.