CS 6140: Machine Learning


GENERAL INFORMATION

  • Instructor: Prof. Ehsan Elhamifar
  • Office Hours: Thursdays, 2:30pm-3:30pm, 310E WVH
  • Class: Mondays and Thursdays, 11:45am-1:25pm, Forsyth Building 130
  • TA: Rui Dong (Office Hours: Mondays, 4:00pm-5:00pm, 362 WVH)
  • TA: Haiyi Mao (Office Hours: Tuesdays, 4:00pm-5:00pm, 362 WVH)
  • Discussions, lecture materials, and homeworks are posted on Piazza

    DESCRIPTION

    This course covers practical algorithms and the underlying theory of machine learning from a variety of perspectives. Topics include supervised learning (generative/discriminative learning, parametric/non-parametric learning, deep neural networks, support vector machines), unsupervised learning (clustering, dimensionality reduction, kernel methods), and learning theory (bias/variance tradeoffs, VC theory, large margins). The course will also discuss recent applications of machine learning to areas such as computer vision, data mining, natural language processing, speech recognition, and robotics.

    PREREQUISITES

    Introduction to Probability and Statistics, Linear Algebra, and Algorithms.

    SYLLABUS
    1. Supervised Learning

      • Linear regression, overfitting, regularization, sparsity (see the regression sketch after this list)

      • Bias-variance tradeoff

      • Logistic regression

      • Naive Bayes

      • Decision trees and instance-based learning

      • Ensemble methods: boosting and bagging

      • Perceptron

      • Neural networks and deep learning: DNNs, RNNs, CNNs

      • SVM and kernels

      • Sample complexity, PAC learning, and VC dimension

      • Graphical models: directed and undirected, HMMs, dynamical systems
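
    As a concrete taste of the first topic above, the following is a minimal sketch of ridge-regularized linear regression solved in closed form with NumPy. The synthetic data, the variable names, and the regularization strength lam are illustrative assumptions, not course code.

        import numpy as np

        # Synthetic data: y = 3*x1 - 2*x2 + noise (illustrative only)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 2))
        y = X @ np.array([3.0, -2.0]) + 0.1 * rng.normal(size=100)

        # Ridge solution: w = (X^T X + lam * I)^{-1} X^T y
        lam = 0.1  # regularization strength (assumed value)
        w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
        print(w)  # close to [3, -2]; increasing lam shrinks w toward zero

    Setting lam = 0 recovers ordinary least squares; increasing lam trades variance for bias, previewing the bias-variance tradeoff listed above.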

    2. Unsupervised Learning

      • Clustering: k-means, mixture models, expectation maximization, k-medoids, spectral clustering (see the k-means sketch after this list)

      • Dimensionality reduction: PCA, Kernel PCA, Factor Analysis, ICA, CCA, MDS

      • Matrix and tensor factorization

      • Topic modeling

      • Model selection
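
    Similarly, on the unsupervised side, here is a minimal sketch of Lloyd's algorithm for k-means clustering in NumPy; the synthetic blobs, the choice k = 2, and the fixed iteration count are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        # Two synthetic Gaussian blobs centered near 0 and 4 (illustrative data)
        X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])

        k = 2
        centers = X[rng.choice(len(X), size=k, replace=False)]  # random initialization
        for _ in range(20):  # a fixed number of Lloyd iterations
            # Assignment step: each point goes to its nearest center
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # Update step: each center becomes the mean of its assigned points
            centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        print(centers)  # the two cluster centers, near [0, 0] and [4, 4] here

    Each iteration cannot increase the within-cluster sum of squares, which is why Lloyd's algorithm converges, though only to a local optimum that depends on the initialization.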

    GRADING

    Homeworks and projects are due at the beginning of class on the specified dates. No late homeworks or projects will be accepted.

    • Homeworks (50%)

    • Exam (25%)

    • Final project (25%)

    TEXTBOOKS

    • [KM] Kevin P. Murphy, Machine Learning: A Probabilistic Perspective. [Required]

    • [CB] Christopher Bishop, Pattern Recognition and Machine Learning. [Optional]

    • [KF] Daphne Koller and Nir Friedman, Probabilistic Graphical Models: Principles and Techniques. [Optional]

    READINGS

      Lecture 1: Introduction to ML, Linear Regression

      • Chapters 1 and 2 from KM book.

      Lecture 2: Linear Regression, Robust Regression, Overfitting, Regularization

      • Chapter 7 from KM book.

      Lecture 3: Point Estimation, Maximum Likelihood Estimation, MAP Estimation

      • Chapter 3 from KM book.

      Lecture 4: Bayesian Learning, Generative Modeling for Classification

      • Chapter 3 from KM book.

      Lecture 5: Generative Modeling for Classification: Naive Bayes, Gaussian Discriminant Analysis

      • Chapters 3 and 4 from KM book.

      Lecture 6: Discriminative Modeling for Classification: Logistic Regression, Softmax Regression

      • Chapter 8 from KM book.

      Lecture 7: Perceptron Algorithm, Functional and Geometric Margins, Support Vector Machines (SVM)

      • Chapters 8 and 14 from KM book.

      Lecture 8: Support Vector Machines (SVM), Max-Margin Classification, Lagrange Duality, KKT Conditions

      • Chapter 14 from KM book.

      Lecture 9: Kernels, Kernel SVM, Soft-Margin SVM, SMO Algorithm, Multi-Class SVM

      • Chapter 14 from KM book.

      Lecture 10: Ensemble Methods, Bagging, Boosting

      • Chapter 16 from KM book.

      Lecture 11: Neural Networks, Architectures, Activations, Outputs, Forward Propagation

      • See Piazza for reading materials.

      Lecture 12: Feed Forward NNs, Forward and Backward Propagation, Training via Backpropagation Algorithm

      • See Piazza for reading materials.

      Lecture 13: Dimensionality Reduction, PCA, Kernel PCA

      • Chapters 12 and 14 from KM book.

      Lecture 14: Dimensionality Reduction via NNs and Autoencoders, Sparsity, Training Autoencoders

      • See Piazza for reading materials.

      Lecture 15: Convolutional Neural Networks, Architectures, Training, Examples

      • See Piazza for reading materials.

      Lecture 16: Recurrent Neural Networks, LSTMs, Architectures, Training, Examples, Visualization

      • See Piazza for reading materials.

      Lecture 17: Centroid Clustering via K-Means, Subspace Clustering via K-Subspaces

      • See Piazza for reading materials.

      Lecture 18: Similarity Graphs, Graph Laplacian and its Properties, Spectral Clustering

      • Chapter 25 from KM book.

      Lecture 19: Spectral Clustering, Graph-Cuts

      • Chapters 22 and 25 from KM book.

      Lecture 20: Latent Variable Models, EM Algorithm, Mixture of Gaussians

      • Chapter 11 from KM book.

      Lecture 21: Sequential Data Modeling, Markov Models and Hidden Markov Models

      • Chapters 10 and 17 from KM book.

    ETHICS

    All students in the course are subject to Northeastern University's Academic Integrity Policy. Any report, homework, or project submitted by a student in this course for academic credit must be the student's own work. Collaboration is allowed only when explicitly permitted. Per CCIS policy, violations of the rules, including cheating, fabrication, and plagiarism, will be reported to the Office of Student Conduct and Conflict Resolution (OSCCR). This may result in deferred suspension, suspension, or expulsion from the university.