
Machine Learning fundamentals - WMM9MO21

  • Teaching hours

    • Lectures (CM): 18.0
    • Tutorials (TD): -
    • Practical work (TP): -
    • Project: -
    • Internship: -
    • Written exam (DS): -
    ECTS credits: 3.0
  • Coordinator: Massih-Reza AMINI

Objectives

This course provides a broad introduction to the field of Machine Learning, covering each of its major frameworks: supervised, unsupervised and semi-supervised learning.

Content

Part I - Supervised Learning
This part gives an overview of the foundations of supervised learning. We will see that learning is an inductive process in which a general rule is found from a finite set of labeled observations by minimizing the empirical risk of the rule over that set. The study of consistency gives conditions under which, in the limit of infinite sample sizes, the minimizer of the empirical risk leads to a value of the risk that is as good as the best attainable risk. Direct minimization of the empirical risk is not tractable, as the latter is not differentiable, so learning algorithms find the parameters of the learning rule by minimizing a convex upper bound (or surrogate) of the empirical risk. We present classical strategies for unconstrained convex optimization: gradient descent, the quasi-Newton approach, and conjugate gradient descent. We then present classical learning algorithms for binary classification (the perceptron, logistic regression and boosting), linking the development of these models to the Empirical Risk Minimization framework, and extend them to the multi-class classification paradigm. In particular, we present the Multi-Layer Perceptron and the back-propagation algorithm used in deep learning.
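As a concrete illustration of surrogate-based Empirical Risk Minimization, here is a minimal sketch (not part of the official course material; all names, data and hyperparameters are illustrative) of binary logistic regression trained by plain gradient descent: the logistic loss plays the role of a convex, differentiable upper bound of the 0/1 empirical risk.

    import numpy as np

    def logistic_loss(w, X, y):
        # y in {-1, +1}; mean of log(1 + exp(-y <w, x>)), a convex
        # surrogate of the (non-differentiable) 0/1 empirical risk
        margins = y * (X @ w)
        return np.mean(np.log1p(np.exp(-margins)))

    def gradient(w, X, y):
        # d/dw log(1 + exp(-m)) = -y x / (1 + exp(m)) with m = y <w, x>
        margins = y * (X @ w)
        coeff = -y / (1.0 + np.exp(margins))
        return (X * coeff[:, None]).mean(axis=0)

    def fit(X, y, lr=0.1, n_iters=1000):
        # Plain gradient descent on the surrogate loss
        w = np.zeros(X.shape[1])
        for _ in range(n_iters):
            w -= lr * gradient(w, X, y)
        return w

    # Toy usage: two Gaussian blobs with labels in {-1, +1}
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
    y = np.concatenate([-np.ones(50), np.ones(50)])
    w = fit(X, y)
    print("surrogate loss:", logistic_loss(w, X, y))
    print("training 0/1 error:", np.mean(np.sign(X @ w) != y))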

Part II - Unsupervised and semi-supervised Learning
We will present generative models for clustering, as well as two powerful tools for parameter estimation, namely the Expectation-Maximization (EM) and Classification Expectation-Maximization (CEM) algorithms. In the context of Big Data, labeling observations for learning is a tedious task. The semi-supervised paradigm aims at learning from a few labeled and a huge amount of unlabeled observations. In this part we review the three families of techniques proposed in semi-supervised learning, namely graphical, generative and discriminant models.
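To make the EM algorithm concrete, below is a minimal sketch (again illustrative, not official course code) that fits a two-component one-dimensional Gaussian mixture; the CEM variant would simply replace the soft responsibilities of the E-step with hard assignments of each point to its most probable component.

    import numpy as np

    def em_gmm_1d(x, n_iters=100):
        # Initialize mixing weights, means and variances of the two components
        pi = np.array([0.5, 0.5])
        mu = np.array([x.min(), x.max()])
        var = np.array([x.var(), x.var()])
        for _ in range(n_iters):
            # E-step: posterior responsibility of each component for each point
            dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = pi * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate the parameters from the responsibilities
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var

    # Toy usage: a sample drawn from a mixture of two Gaussians
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])
    print(em_gmm_1d(x))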

Prerequisites

First-year Master (M1) level in computer science or statistics

Assessment

Reports: 30% of the mark
Final exam: 70% of the mark

The exam is available in English only.

Calendar

The course is scheduled in the following programmes:

  • Cursus ingénieur - Master 2 Informatique - Semestre 9 (this course is taught in English only)
  • Cursus ingénieur - Master 2 Math. et Applications - Semestre 9 (this course is taught in English only)
See the 2020/2021 timetable.

Additional information

Course code: WMM9MO21
Language(s) of instruction: FR

You can find this course in the list of all courses.

Bibliography

[1] Massih-Reza Amini - Apprentissage Machine : de la théorie à la pratique, Eyrolles, 2015.
[2] Christopher Bishop - Neural Networks for Pattern Recognition, Oxford University Press, 1995.
[3] Richard Duda, Peter Hart & David Stork - Pattern Classification, John Wiley & Sons, 1997.
[4] John Shawe-Taylor & Nello Cristianini - Kernel Methods for Pattern Analysis, Cambridge University Press, 2004.
[5] Colin McDiarmid - On the method of bounded differences, Surveys in Combinatorics, 141:148-188, 1989.
[6] Mehryar Mohri, Afshin Rostamizadeh & Ameet Talwalkar - Foundations of Machine Learning, MIT Press, 2012.
[7] Bernhard Schölkopf & Alexander J. Smola - Learning with Kernels, MIT Press, 2002.
[8] Vladimir Koltchinskii - Rademacher penalties and structural risk minimization, IEEE Transactions on Information Theory, 47(5):1902-1914, 2001.


Last updated: 15 January 2017

Université Grenoble Alpes