ISYE 8813: Special Topics in Data Science
Time/place: TuTh 1:35-2:55, IC 107
Joint instructors: Arkadi Nemirovski and Jeff Wu
General teaching plan: This course is taught by two instructors in two separate but intellectually connected modules. Wu will teach the first half and Nemirovski the second half. Each part has its own course outline and follows a separate grading policy (see below). The final grade is determined by the following formula:
final grade = min(max(grade1, grade2), min(grade1, grade2) + 1).
Thus a student must do satisfactory work in each module in order to earn a respectable grade.
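For illustration, assuming a hypothetical 4-point scale (A = 4, B = 3, C = 2, D = 1), grade1 = 4 and grade2 = 2 give min(max(4,2), min(4,2)+1) = min(4, 3) = 3: excellence in one module cannot fully compensate for weak work in the other.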
Part I: Computationally inspired statistical methods (by Wu).
Explanation: This module will cover statistical methods in which computation/optimization is a key component. My lectures will highlight the interface between the statistical idea and the computational method that carries it out. Prominent examples are as follows (in no particular order).
- Maximum likelihood estimation and nonlinear least squares regression: continuous optimization (e.g., the Levenberg-Marquardt-Fletcher algorithm; a SciPy sketch follows this list).
- Bayesian estimation and inference: Markov chain Monte Carlo (MCMC) and related simulation-based optimization, reversible jump MCMC, Laplace approximation, Joseph's DoIt method (a minimal Metropolis-Hastings sketch follows this list).
- EM algorithm and extensions: Wu's original convergence proof, more modern optimization techniques (a Gaussian-mixture EM sketch follows this list).
- Discrete optimal design (as popularized by SAS/JMP): various algorithms developed by statisticians (e.g., the exchange algorithm), generic discrete algorithms, and ad hoc algorithms such as particle swarm optimization (PSO); a greedy point-exchange sketch follows this list.
- Space-filling designs for computer experiments: combinatorial and algorithmic construction, including designs for non-rectangular and non-convex experimental regions.
- Computer experiments and uncertainty quantification (UQ): Kriging; computational techniques to circumvent the near singularity of the correlation matrix in high dimensions; computational techniques to handle qualitative factors.
- Variable selection in regression analysis: nonnegative garrote, lasso, variable selection under heredity constraints, other non-convex penalties (a lasso sketch follows this list).
- Support vector machine.
- Bayesian global optimization: expected improvement (EI) algorithm and its extensions, minimum energy designs, etc. (an EI sketch follows this list).
- Optimization in engineering statistics: robust parameter design, tolerance design, selective assembly, etc.
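Below are brief, illustrative Python sketches of several of the computational techniques above; each is a minimal sketch under stated assumptions, not course material. First, nonlinear least squares via the Levenberg-Marquardt method as implemented in SciPy; the exponential-decay model and synthetic data are illustrative assumptions:

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)  # synthetic data

def residuals(theta):
    a, b = theta
    return a * np.exp(-b * t) - y

# method="lm" invokes MINPACK's Levenberg-Marquardt implementation.
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print(fit.x)  # estimates of (a, b), close to (2.5, 1.3)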
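Next, a minimal random-walk Metropolis-Hastings sampler, the basic building block of MCMC; the standard-normal target and the step size are illustrative assumptions:

import numpy as np

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x, lp = x0, log_target(x0)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()   # symmetric random-walk proposal
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

draws = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=5000)
print(draws.mean(), draws.std())  # roughly 0 and 1 for the N(0,1) target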
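A minimal EM iteration for a two-component Gaussian mixture with unit variances; the synthetic data and the initialization are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

pi, mu1, mu2 = 0.5, -1.0, 1.0  # initial guesses for weight and component means
for _ in range(100):
    # E-step: posterior probability that each point came from component 1
    # (the shared normalizing constant cancels, since both variances are 1)
    d1 = pi * np.exp(-0.5 * (x - mu1) ** 2)
    d2 = (1 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
    w = d1 / (d1 + d2)
    # M-step: weighted maximum-likelihood updates
    pi, mu1, mu2 = w.mean(), np.average(x, weights=w), np.average(x, weights=1 - w)

print(pi, mu1, mu2)  # approximately 0.3, -2, 3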
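A greedy point-exchange sketch for D-optimal design over a finite candidate set, in the spirit of Fedorov-type exchange algorithms; the quadratic model matrix below is an illustrative choice:

import numpy as np

def exchange_d_optimal(F, n_runs, seed=0):
    """F: candidate model matrix, one row per candidate point.
    Swap design rows for candidate rows while det(X'X) increases."""
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(F), n_runs, replace=False))
    logdet = lambda rows: np.linalg.slogdet(F[rows].T @ F[rows])[1]
    best = logdet(idx)
    improved = True
    while improved:
        improved = False
        for i in range(n_runs):           # try replacing each design point
            for c in range(len(F)):       # ... with each candidate point
                trial = idx[:i] + [c] + idx[i + 1:]
                val = logdet(trial)
                if val > best + 1e-9:
                    idx, best, improved = trial, val, True
    return idx

x = np.linspace(-1, 1, 21)
F = np.column_stack([np.ones_like(x), x, x**2])  # quadratic model on [-1, 1]
print(sorted(set(exchange_d_optimal(F, 6))))     # design mass at -1, 0, 1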
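A minimal lasso example using scikit-learn; the synthetic sparse regression problem is an illustrative assumption:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta = np.array([3.0, -2.0, 0, 0, 0, 0, 0, 0, 0, 0])  # only two active predictors
y = X @ beta + 0.5 * rng.standard_normal(100)

# The ell_1 penalty shrinks most coefficients exactly to zero,
# performing estimation and variable selection simultaneously.
model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)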
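Finally, a minimal computation of the expected improvement criterion for minimization; the predictive means and standard deviations would come from a fitted kriging/Gaussian-process model and are illustrative assumptions here:

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.array([0.2, -0.1, 0.5])     # GP predictive means at candidate points
sigma = np.array([0.05, 0.3, 0.4])  # GP predictive standard deviations
print(expected_improvement(mu, sigma, f_best=0.0))
# The candidate maximizing EI is chosen as the next point to evaluate.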
Prerequisites: basic mathematical statistics, regression, and statistical computing, at the level of a master's degree program in statistics, OR, or a related field.
The grade is assigned based on (i) attendance and participation in class, (ii) reports on 1-2 assigned papers, which I will present in class, and (iii) a final exam consisting of an in-class presentation on a subject consistent with the theme of the module. If there are too many students in the class, this presentation may be replaced by a written report from each student. There is no midterm. All required work must be completed by the last lecture of this module on Tu Feb 28. The second module will start on March 2.
Part II: Statistical Inferences via Convex Optimization (by Nemirovski)
Explanation: This module focuses on situations where optimization theory (theory, not algorithms!) seems to be of methodological value in Statistics, acting as the source of statistical inferences with provably optimal, or nearly optimal, performance. In this context, we focus on convex programming theory, both for its power and because the resulting inference routines reduce to solving convex optimization problems and thus can be implemented in a computationally efficient fashion. The topics include:
- Sparsity-oriented signal processing via ℓ_1 minimization (a basis-pursuit sketch follows this list)
- Pairwise and multiple hypothesis testing via convex optimization, with applications to
  - sequential hypothesis testing,
  - recovering linear and quadratic functions of a signal from indirect observations,
  - change point detection,
  - measurement design
- Near-optimal recovery of signals in indirect Gaussian observation scheme
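A minimal sketch of sparse recovery via ℓ_1 minimization (basis pursuit), formulated with CVXPY; the Gaussian sensing matrix and the sparse signal are illustrative assumptions:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5  # signal dimension, number of measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true  # noiseless indirect observations

# Basis pursuit: minimize ||x||_1 subject to A x = y, a convex (in fact LP) problem.
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()
print(np.linalg.norm(x.value - x_true))  # near-zero recovery error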
Prerequisites: elementary Linear Algebra and Analysis and, most importantly, basic mathematical culture. No preliminary knowledge of Optimization or Statistics is necessary.
The grade is assigned based on a final exam. There is no homework or midterm.
Detailed information on the module's contents can be found in the self-contained Lecture Notes:
- A. Juditsky and A. Nemirovski, Statistical Inferences via Convex Optimization,
  http://www2.isye.gatech.edu/~nemirovs/StatOpt_LN.pdf