Deep Learning (lecture course)/2019

__NOTOC__

This is an introductory course on deep learning models and their application to various problems in image and text analysis.

'''Instructors''': [[Участник:Kropotov|Dmitry Kropotov]], [[Участник:Victor Kitov|Victor Kitov]], Nadezhda Chirkova, Oleg Ivanov and Evgeny Nizhibitsky.

The timetable for Autumn 2019: Mondays, lectures begin at 10:30, seminars begin at 12:15, room 526b.

For questions: [https://t.me/joinchat/DEBCqg81GIyo4YNkdQtqzw course chat in Telegram]

== News ==

'''09 Sep:''' Today's lecture is cancelled. The seminar will start as usual at 12:15.

'''06 Sep:''' The first theoretical assignment has been uploaded to anytask. Deadline: '''15 Sep'''. Please note: this is a strict deadline, no late submissions are possible.

'''01 Oct:''' The second practical assignment has been uploaded to anytask. Deadline: '''15 Oct'''.
== Rules and grades ==
There are 7 home assignments during the course. For each assignment a student may get up to 10 points, plus possible bonus points. For some assignments, students are allowed to submit their completed work up to one week after the deadline, with a grade reduction of 0.5 points per day. All assignments are prepared in English.

Also, each student may give a short 10-minute talk in English on a recent deep learning paper. For this talk a student may get up to 5 points.

The total grade for the course is calculated as follows: Round-up(0.3*<Exam_grade> + 0.7*<Semester_grade>), where <Semester_grade> = min(10, (<Assignments_total_grade> + <Talk_grade>) / 7) and <Exam_grade> is the grade for the final exam (up to 10 points). A short worked example is given below the table.

{| class="standard"
!Final grade !! Total grade !! Necessary conditions
|-
| 5 || >=8 || all practical assignments are done, exam grade >= 6, and the oral talk is given
|-
| 4 || >=6 || 6 practical assignments are done, exam grade >= 4
|-
| 3 || >=4 || 3 practical assignments are done, exam grade >= 4
|-
|}
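
To make the formula above concrete, here is a minimal sketch in Python of how the total course grade could be computed. The function name and all point values in this example are hypothetical and are given only to illustrate the rule, not as part of the official course materials.

<syntaxhighlight lang="python">
import math

def total_grade(assignment_scores, talk_points, exam_grade):
    """Total grade under the rule above: Round-up(0.3*Exam + 0.7*Semester),
    where Semester = min(10, (sum of assignment points + talk points) / 7)."""
    semester_grade = min(10, (sum(assignment_scores) + talk_points) / 7)
    return math.ceil(0.3 * exam_grade + 0.7 * semester_grade)

# Hypothetical example: seven assignments worth 60 points in total,
# 4 points for the talk, and an exam grade of 8.
scores = [9, 8, 10, 7, 9, 8, 9]
print(total_grade(scores, talk_points=4, exam_grade=8))  # -> 9 (semester grade is about 9.14)
</syntaxhighlight>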
== Practical assignments ==
Practical assignments are provided on the course page at ''anytask.org''. Invite code: IXLOwZU
== Lectures ==
{| class="standard"
!Date !! No. !! Topic !! Materials
|-
| 02&nbsp;Sep.&nbsp;2019 || align="center"|1 || Introduction. Fully-connected networks. ||
|-
| 16&nbsp;Sep.&nbsp;2019 || align="center"|2 || Optimization and regularization for neural networks. Convolutional neural networks. || [https://drive.google.com/file/d/1d2Pn4Lb15rQ1p-z1SeOzuh9kmvSV4d3I/view?usp=sharing Presentation (pptx)]<br> [https://drive.google.com/file/d/1sKdXAc1iSSNy4ZrRaPs2UXuWHZE29Bnc/view?usp=sharing Presentation (pdf)]<br> [https://arxiv.org/pdf/1412.6980.pdf A paper about ADAM]<br> [http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf A paper about DropOut]<br> [https://arxiv.org/pdf/1502.03167.pdf A paper about BatchNorm]
|-
| 23&nbsp;Sep.&nbsp;2019 || align="center"|3 || Semantic image segmentation || [https://yadi.sk/i/qaR_c-9fE0G0Nw Presentation (pdf)]<br>[https://portrait.nizhib.ai/ Portrait Demo] ([https://github.com/nizhib/portrait-demo source])
|-
| 30&nbsp;Sep.&nbsp;2019 || align="center"|4 || Object detection on images || [https://yadi.sk/i/WN0IV9TKBkvh5A Presentation (pdf)]<br>[https://yadi.sk/i/WN0IV9TKBkvh5A DS Bowl 2018 report (pdf)]
|-
| 07&nbsp;Oct.&nbsp;2019 || align="center"|5 || Image style transfer || Presentations [https://yadi.sk/i/1wCKBk6nt3IeLA 1], [https://yadi.sk/i/hl9i5Y1fuPr4Qg 2], [https://yadi.sk/i/eDHSVCRPNTxLpA 3], [https://yadi.sk/i/v-Oc7h8eTgIN-A 4], [https://yadi.sk/i/xwGRhEXa3cl9uA 5], [https://yadi.sk/i/rXAgWiDTLzgnsQ 6].
|-
| 14&nbsp;Oct.&nbsp;2019 || align="center"|6 || Recurrent neural networks || [https://drive.google.com/file/d/1KvSzzctOjRhYwJH_9LJJeZhMp4USTcDV/view?usp=sharing Presentation (pdf)]
|-
| 21&nbsp;Oct.&nbsp;2019 || align="center"|7 || Attention and memory in recurrent neural networks ||
|-
| 28&nbsp;Oct.&nbsp;2019 || align="center"|8 || Variational autoencoder ||
|-
| 11&nbsp;Nov.&nbsp;2019 || align="center"|9 || Generative adversarial networks ||
|-
| 18&nbsp;Nov.&nbsp;2019 || align="center"|10 || Reinforcement learning. Q-learning, DQN. ||
|-
| 25&nbsp;Nov.&nbsp;2019 || align="center"|11 || Policy gradient in reinforcement learning ||
|-
| 02&nbsp;Dec.&nbsp;2019 || align="center"|12 || Implicit reparameterization trick. Gumbel-Softmax approach for discrete reparameterization. ||
|-
| 09&nbsp;Dec.&nbsp;2019 || align="center"|13 || Students' talks ||
|-
|}
== Seminars ==

{| class="standard"
!Date !! No. !! Topic !! Need laptops !! Materials
|-
| 2&nbsp;Sep.&nbsp;2019 || align="center"|1 || Matrix calculus, automatic differentiation. || No || [https://drive.google.com/file/d/1Yu790uIPyxp9JIyysxfJDor_LJQu83gQ/view?usp=sharing Synopsis]<br> [https://people.maths.ox.ac.uk/gilesm/files/NA-08-01.pdf pdf]
|-
| 9&nbsp;Sep.&nbsp;2019 || align="center"|2 || Introduction to Pytorch || Yes || [https://github.com/nadiinchi/dl_labs/blob/master/lab_pytorch.ipynb ipynb]
|-
| 16&nbsp;Sep.&nbsp;2019 || align="center"|3 || Convolutional neural networks on Pytorch || Yes || [https://github.com/nadiinchi/dl_labs/blob/master/lab_cnn_english.ipynb ipynb]
|-
| 23&nbsp;Sep.&nbsp;2019 || align="center"|4 || Semantic image segmentation || No ||
|-
| 30&nbsp;Sep.&nbsp;2019 || align="center"|5 || Face recognition || No ||
|-
| 07&nbsp;Oct.&nbsp;2019 || align="center"|6 || Image style transfer || No ||
|-
| 14&nbsp;Oct.&nbsp;2019 || align="center"|7 || DropOut for recurrent neural networks || Yes || [https://github.com/nadiinchi/dl_labs/blob/master/lab_rnn_english.ipynb ipynb]
|-
| 21&nbsp;Oct.&nbsp;2019 || align="center"|8 || Models with attention || Yes ||
|-
| 28&nbsp;Oct.&nbsp;2019 || align="center"|9 || Variational autoencoders, adversarial attacks || Yes ||
|-
| 11&nbsp;Nov.&nbsp;2019 || align="center"|10 || Generative adversarial networks || Yes ||
|-
| 18&nbsp;Nov.&nbsp;2019 || align="center"|11 || Bandits || No ||
|-
| 25&nbsp;Nov.&nbsp;2019 || align="center"|12 || Actor-critic approach in RL || No ||
|-
| 02&nbsp;Dec.&nbsp;2019 || align="center"|13 || Semi-supervised discrete VAE || Yes ||
|-
| 09&nbsp;Dec.&nbsp;2019 || align="center"|14 || Students' talks || No ||
|-
|}

== Arxiv ==

2017

2016
