# Courses

## Courses to take

* [Deep Learning by Google](https://eu.udacity.com/course/deep-learning--ud730)
* [Machine Learning Crash Course with TensorFlow APIs](https://developers.google.com/machine-learning/crash-course/)
* [Learning TensorFlow](https://learningtensorflow.com)
* [Kaggle Learn](https://www.kaggle.com/learn/overview)
* [Data Visualization (Kaggle)](https://www.kaggle.com/learn/data-visualisation)
* Andrew Ng
* cs221
* Future Learning
* [HSE Course (GitHub)](https://github.com/esokolov/ml-course-hse/)
* [Microsoft Professional Program for Artificial Intelligence](https://academy.microsoft.com/en-us/professional-program/tracks/artificial-intelligence/)
* [Bloomberg ML](https://bloomberg.github.io/foml/#home)
* [Toronto DL](http://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/)
* [Elements of AI](https://course.elementsofai.com)
* [Julia Scientific Programming (Coursera)](https://ru.coursera.org/learn/julia-programming)
* [Stepik Julia](https://stepik.org/course/2407)

## Stepik Additional

* [Examination](https://stepik.org/lesson/68008/step/1?unit=44971)
* [Adaptive tasks](https://stepik.org/lesson/43732/step/1?adaptive=true&unit=22777)

## Courses I am taking

* ODS ML Course Open
* Deep NLP MIPT
* cs224n (NLP)
* cs231n (Deep Learning)
* [Carnegie Mellon Deep Learning Course](http://deeplearning.cs.cmu.edu)

## Courses I have completed

* [Summer School: Deep Learning Workshop](http://letnyayashkola.org/deeplearning/)
* DataCamp:
  * [Intro to Python for Data Science](https://www.datacamp.com/courses/intro-to-python-for-data-science)
  * [Intermediate Python for Data Science](https://www.datacamp.com/courses/intermediate-python-for-data-science)
  * [Intro to SQL for Data Science](https://www.datacamp.com/courses/intro-to-sql-for-data-science)
  * [Joining Data in PostgreSQL](https://www.datacamp.com/courses/joining-data-in-postgresql)

## Schools, conferences, and seminars I attended, participated in, or helped organize

* [Summer School: Deep Learning Workshop](http://letnyayashkola.org/deeplearning/)
* [Data Fest^5, Moscow, April 28, 2018](http://datafest.ru/5/)

-----

# Reading

## Things to read

* [How Numba and Cython speed up Python code](https://rushter.com/blog/numba-cython-python-optimization/)
* [Serving machine learning models with RestRserve on R](http://restrserve.org/serving-ml.html)
* [R TensorFlow Tutorial](https://tensorflow.rstudio.com)
* [R Keras Tutorial](https://keras.rstudio.com)
* [New Resources for Deep Learning with the Neuromation Platform](https://medium.com/neuromation-io-blog/new-resources-for-deep-learning-with-the-neuromation-platform-55fd411cb440)
* [Word2Vec Tutorial](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/)
* [Series of articles on word embeddings](http://ruder.io/word-embeddings-1/)
* [ImageNet Classification with Deep Convolutional Neural Networks - Colyer](https://blog.acolyer.org/2016/04/20/imagenet-classification-with-deep-convolutional-neural-networks/)
* [Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition](https://blog.acolyer.org/2016/04/19/context-dependent-pre-trained-deep-neural-networks-for-large-vocabulary-speech-recognition/)
* [A true from-scratch implementation of a neural network](https://habrahabr.ru/post/352632/)
* [Functional Programming for Deep Learning](https://www.notion.so/metya/5f25295584414592a3581836625b77d3#d5f53eac3e7146eeba6bf6365449600a)
* Everything from here! A wonderful blog on understanding the basics of deep learning: [colah.github.io](http://colah.github.io/archive.html)
  * For example: [Understanding LSTM Networks](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
* [Manning and LeCun debate innate priors and Chomsky](http://www.abigailsee.com/2018/02/21/deep-learning-structure-and-innate-priors.html)
* [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)
* [Association rules, or beer and diapers](https://habrahabr.ru/company/ods/blog/353502/)
* [Connections between abstract algebra and high school algebra](https://blogs.ams.org/matheducation/2015/12/10/connections-between-abstract-algebra-and-high-school-algebra-a-few-connections-worth-exploring/)
* [Instance Embedding: Segmentation Without Proposals](https://medium.com/@barvinograd1/instance-embedding-instance-segmentation-without-proposals-31946a7c53e1)
* [A survey of deep convolutional network topologies](https://habrahabr.ru/company/mailru/blog/311706/)
* [Generative Adversarial Nets and Variational Autoencoders at ICML 2018](https://medium.com/peltarion/generative-adversarial-nets-and-variational-autoencoders-at-icml-2018-6878416ebf22)
* [Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification](https://www.nature.com/articles/s41598-018-30619-y)
* [CERN Project Sees Orders-of-Magnitude Speedup with AI Approach](https://www.hpcwire.com/2018/08/14/cern-incorporates-ai-into-physics-based-simulations/)
* [***Large-Scale Study of Curiosity-Driven Learning***](https://pathak22.github.io/large-scale-curiosity/)
* [Building a text classification model with TensorFlow Hub and Estimators](https://medium.com/tensorflow/building-a-text-classification-model-with-tensorflow-hub-and-estimators-3169e7aa568)
* [Moving Beyond Translation with the Universal Transformer](https://ai.googleblog.com/2018/08/moving-beyond-translation-with.html)
* [Explaining Black-Box Machine Learning Models - Code Part 1: tabular data + caret + iml](https://shirinsplayground.netlify.com/2018/07/explaining_ml_models_code_caret_iml/)
  * [Part 2: Keras DNN](https://shirinsplayground.netlify.com/2018/06/keras_fruits_lime/)
  * [Part 3: Boosting](https://shirinsplayground.netlify.com/2018/07/explaining_ml_models_code_text_lime/)
* [Recent Advances for a Better Understanding of Deep Learning − Part I](https://towardsdatascience.com/recent-advances-for-a-better-understanding-of-deep-learning-part-i-5ce34d1cc914)
* [What is a Generative Adversarial Network?](http://hunterheidenreich.com/blog/what-is-a-gan/)
* [Think Julia: How to Think Like a Computer Scientist](https://benlauwens.github.io/ThinkJulia.jl/latest/book.html)

## Things I have read

* [On the Fourier transform](https://habrahabr.ru/post/196374/)
* [How weather is predicted](https://vas3k.ru/blog/how_to_weather/)
* [Generating poetry with neural networks](https://vas3k.ru/blog/394/)
* [Blockchain]()
* [Ethereum](https://vas3k.ru/blog/ethereum/)
* [Autoencoders in Keras](https://habrahabr.ru/post/331382/)
* [Bias and variance, by Dyakonov](https://alexanderdyakonov.wordpress.com/2018/04/25/%D1%81%D0%BC%D0%B5%D1%89%D0%B5%D0%BD%D0%B8%D0%B5-bias-%D0%B8-%D1%80%D0%B0%D0%B7%D0%B1%D1%80%D0%BE%D1%81-variance-%D0%BC%D0%BE%D0%B4%D0%B5%D0%BB%D0%B8-%D0%B0%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82/)
* [Recognizing scenes and landmarks](https://habr.com/company/jugru/blog/419501/)
* [Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples](https://blog.acolyer.org/2018/08/15/obfuscated-gradients-give-a-false-sense-of-security-circumventing-defenses-to-adversarial-examples/)
* [When DNNs go wrong – adversarial examples and what we can learn from them](https://blog.acolyer.org/2017/02/28/when-dnns-go-wrong-adversarial-examples-and-what-we-can-learn-from-them/)
* [Understanding, generalisation, and transfer learning in deep neural networks](https://blog.acolyer.org/2017/02/27/understanding-generalisation-and-transfer-learning-in-deep-neural-networks/)
* [Universal adversarial perturbations](https://blog.acolyer.org/2017/09/12/universal-adversarial-perturbations/)
* [Delayed impact of fair machine learning](https://blog.acolyer.org/2018/08/13/delayed-impact-of-fair-machine-learning/)
* [Why it is time to stop treating neural networks as a black box](https://habr.com/post/420381/)
* [Ultimate guide to handle Big Datasets for Machine Learning using Dask (in Python)](https://www.analyticsvidhya.com/blog/2018/08/dask-big-datasets-machine_learning-python/)
* [OpenCV People Counter](https://www.pyimagesearch.com/2018/08/13/opencv-people-counter/)
* [Lies, damned lies, and causal inference](https://ailev.livejournal.com/1435703.html)
* [Pandas on Ray: early lessons](https://rise.cs.berkeley.edu/blog/pandas-on-ray-early-lessons/)
* [Red Flags in Data Science Interviews](http://hookedondata.org/Red-Flags-in-Data-Science-Interviews/)
* [Classifying physical activity from smartphone data (Keras and R)](http://blogs.rstudio.com/tensorflow/posts/2018-07-17-activity-detection/)
* [Keras for R](http://blogs.rstudio.com/tensorflow/posts/2017-09-06-keras-for-r/)
* [Simple audio classification with Keras in R](http://blogs.rstudio.com/tensorflow/posts/2018-06-06-simple-audio-classification-keras/)
* [Pizza à la semi-supervised](https://habr.com/company/ods/blog/422873/)
* [Detecting car colors with neural networks and TensorFlow](https://habr.com/company/intel/blog/422689/)
* [Yes, you should understand backprop, by A. Karpathy](https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b)

-----

## Things I wrote or translated

* [Applying the Deep Watershed Transform in the Kaggle Data Science Bowl 2018 competition](https://habrahabr.ru/post/354040/)
* [From satellite imagery to graphs (the SpaceNet Road Detector competition): a top-10 finish and code](https://habrahabr.ru/post/349068/)
* [The Pri-matrix Factorization competition on DrivenData with 1 TB of data: how we took 3rd place](https://habrahabr.ru/post/348540/)

# Videos

## Videos to watch

* [Essence of Linear Algebra](https://www.youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)
* [PyData Meetup (TensorFlow Architecture)](https://www.youtube.com/watch?v=aoin1nl_eSA&feature=youtu.be&t=5810) — [Materials](https://github.com/yurijvolkov/pydata_examples)
* [Kaggle Mercedes-Benz: predicting car testing time](https://www.youtube.com/watch?v=HT3QpRp2ewA)
* [Efficient nearest-neighbor models](https://www.youtube.com/watch?v=UUm4MOyVTnE)
* [Lisa Feldman: Emotions and the brain](https://www.youtube.com/watch?v=h7Mtwds0wW4&feature=youtu.be)
* [Manning and LeCun debate innate priors and Chomsky](https://youtu.be/fKk9KhGRBdI)
* [Attention Is All You Need, by Illia Polosukhin](https://www.youtube.com/watch?v=I0nX4HDmXKc)
* [Simon says LSTM](https://www.youtube.com/watch?v=wYI7RZz4Rz0)
* [Interview with Viktor Rogulenko](http://youtube.com/watch?v=ymSqI0hVj-Q)

## Videos I have watched

* [NLP, by Natekin](https://www.youtube.com/watch?v=Ozm0bEi5KaI)
* [Bias in an Artificial Neural Network explained | How bias impacts training](https://www.youtube.com/watch?v=HetFihsXSys)
* [Keras init bias](https://www.youtube.com/watch?v=zralyi2Ft20)
* [Text generation with Markov chains](https://tproger.ru/translations/markov-chains/)
* [How Ethereum works](https://www.youtube.com/watch?v=a-Azm3nEuUI)
* [Dstl Safe Passage: vehicle detection and classification — Vladimir Iglovikov](https://www.youtube.com/watch?v=NV9LSUIVkWA&feature=youtu.be&t=1247)
* [Big data analysis in particle physics](https://www.youtube.com/watch?v=SgI8S8ltBKc&feature=youtu.be)
* [Large-Scale Study of Curiosity-Driven Learning](https://youtu.be/l1FqtAHfJLI)
* [Subtyping in Julia: a rational reconstruction](https://www.youtube.com/watch?v=nnOJfPIrFdM)
* [Semantic Folding: a new model for intelligent text processing](https://www.youtube.com/watch?v=HLuRQKzYbb8&feature=youtu.be)
* [Using a Kohonen map for classification](https://www.youtube.com/watch?v=5FiH88Rs8Hc)
* [Lambda Calculus](https://youtu.be/eis11j_iGMs)
* [Essentials: Functional Programming's Y Combinator](https://www.youtube.com/watch?v=9T8A89jgeTI)
* [Illustrated Guide to Recurrent Neural Networks](https://youtu.be/LHXXI4-IEns)
* [Illustrated Guide to LSTMs and GRUs](https://www.youtube.com/watch?v=8HyCNIVRbSU)
* [Visual Rhythm Beat](https://www.youtube.com/watch?v=K3z68mOLbNo&feature=youtu.be)
* [Deep Learning, Structure and Innate Priors](https://youtu.be/fKk9KhGRBdI)

## Papers I have read

* [Deep Learning Based Solar Flare Forecasting Model. I. Results for Line-of-sight Magnetograms (2018)](http://iopscience.iop.org/article/10.3847/1538-4357/aaae00)