Most machine learning classes and textbooks mention that there is no universal supervised learning algorithm that can do reasonably well on all learning problems. Indeed, a series of “no free lunch theorems” states that even in a simple input space, for any learning algorithm, there always exists a bad conditional distribution of outputs given inputs…
Author: Francis Bach
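As a point of reference, one classical formalization of this statement (recalled here from the pattern-recognition literature, e.g. Devroye, Györfi and Lugosi, rather than quoted from the post): for binary classification over a rich enough input space, for every \(\varepsilon > 0\), every sample size \(n\), and every learning rule \(g_n\), there exists a distribution with zero Bayes risk such that
\[ \mathbb{E}\big[ L(g_n) \big] \ \geq \ \frac{1}{2} - \varepsilon, \]
that is, the learned classifier is essentially no better than random guessing on that distribution.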
I am writing a book!
After several attempts, I finally found the energy to start writing a book. It grew out of lecture notes for a graduate class I taught last semester. I make the draft available so that I can get feedback before a (hopefully) final effort next semester. The goal of the book is to present old and…
Going beyond least-squares – II : Self-concordant analysis for logistic regression
Last month, we saw that self-concordance is a key property in optimization for using local quadratic approximations in the sharpest possible way. In particular, it is an affine-invariant property leading to a simple and elegant analysis of Newton's method. The key assumption was a link between third- and second-order derivatives, which took the following form…
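As background on the type of condition being referred to (the standard definitions from the literature, not a quote from the post): a convex function \(f\) is self-concordant in the classical sense when, along every line,
\[ | f'''(t) | \ \leq \ 2 \, f''(t)^{3/2}, \]
while the logistic loss \(\ell(t) = \log(1 + e^{-t})\) only satisfies the weaker “generalized” inequality \(|\ell'''(t)| \leq \ell''(t)\), which follows from \(\ell''(t) = \sigma(t)(1 - \sigma(t))\) and \(\ell'''(t) = \ell''(t)(1 - 2\sigma(t))\) with \(\sigma(t) = 1/(1 + e^{-t})\).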
Going beyond least-squares – I : self-concordant analysis of Newton method
Least-squares is a workhorse of optimization, machine learning, statistics, signal processing, and many other scientific fields. I find it particularly appealing (too much, according to some of my students and colleagues…), because all algorithms, such as stochastic gradient [1], and analyses, such as for kernel ridge regression [2], are much simpler and rely on reasonably…
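To illustrate the simplicity alluded to above (a standard computation, not specific to this post), the ordinary least-squares problem admits a closed-form solution:
\[ \min_{w \in \mathbb{R}^d} \ \frac{1}{2n} \, \| y - X w \|_2^2 \qquad \text{is solved by} \qquad \hat{w} = ( X^\top X )^{-1} X^\top y, \]
assuming \(X^\top X\) is invertible, which is what makes both the algorithms and the analyses so transparent.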
Finding global minima with kernel approximations
Last month, I showed how global optimization based only on accessing function values can be hard with no convexity assumption. In a nutshell, with limited smoothness, the number of function evaluations has to grow exponentially fast in dimension, which is a rather negative statement. On the positive side, this number does not grow as fast…
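The usual way this is quantified (standard minimax rates recalled as context, not quoted from the post): for functions on \([0,1]^d\) with smoothness of order \(s\), the best achievable optimization error after \(n\) function evaluations scales as
\[ \inf_{\textrm{algorithms}} \ \sup_{f} \ \big( f(\hat{x}_n) - \inf_x f(x) \big) \ \asymp \ n^{-s/d}, \]
so that reaching precision \(\varepsilon\) requires of the order of \(\varepsilon^{-d/s}\) evaluations, that is, exponentially many in the dimension \(d\).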
Optimization is as hard as approximation
Optimization is a key tool in machine learning, where the goal is to achieve the best possible objective function value in a minimum amount of time. Obtaining any form of global guarantees can usually be done with convex objective functions, or with special cases such as risk minimization with over-parameterized one-hidden-layer neural networks (see…
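As a reminder of why convexity is the prototypical setting where global guarantees are possible (standard facts, not specific to this post): a differentiable convex function \(f\) satisfies
\[ f(y) \ \geq \ f(x) + \nabla f(x)^\top ( y - x ) \qquad \text{for all } x, y, \]
so any stationary point \(\nabla f(x) = 0\) is automatically a global minimizer.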
The Cauchy residue trick: spectral analysis made “easy”
In many areas of machine learning, statistics and signal processing, eigenvalue decompositions are commonly used, e.g., in principal component analysis, spectral clustering, convergence analysis of Markov chains, convergence analysis of optimization algorithms, low-rank inducing regularizers, community detection, seriation, etc. Understanding how the spectral decomposition of a matrix changes as the matrix varies is…
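The starting point behind such residue tricks is presumably the classical contour-integral representation of spectral projectors (a standard identity from complex analysis and perturbation theory, recalled here as background): if \(A\) is a symmetric matrix and \(\Gamma\) a contour in the complex plane enclosing the eigenvalue \(\lambda\) and no other eigenvalue of \(A\), then
\[ \Pi_\lambda \ = \ \frac{1}{2 i \pi} \oint_{\Gamma} ( z I - A )^{-1} \, dz \]
is the orthogonal projector onto the corresponding eigenspace, making the dependence on \(A\) explicit and amenable to perturbation arguments.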
Polynomial magic III : Hermite polynomials
After two blog posts earlier this year on Chebyshev and Jacobi polynomials, I am coming back to orthogonal polynomials, with Hermite polynomials. This time, in terms of applications to machine learning, no acceleration, but some interesting closed-form expansions in positive-definite kernel methods. Definition and first properties. There are many equivalent ways to define Hermite polynomials…
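For concreteness, one of the standard definitions (given here as background, with a normalization that may differ from the one used in the post) is the Rodrigues-type formula for the probabilists' Hermite polynomials,
\[ H_n(x) \ = \ (-1)^n \, e^{x^2/2} \, \frac{d^n}{dx^n} \, e^{-x^2/2}, \]
which are orthogonal with respect to the standard Gaussian measure, that is, \(\mathbb{E}[ H_m(Z) H_n(Z) ] = n! \, \delta_{mn}\) for \(Z \sim \mathcal{N}(0,1)\).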
The many faces of integration by parts – II : Randomized smoothing and score functions
This month I will follow up on last month's blog post and look at another application of integration by parts, which is central to many interesting algorithms in machine learning, optimization and statistics. In this post, I will consider extensions in higher dimensions, where we take integrals on a subset of \(\mathbb{R}^d\), and focus primarily on…
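A typical identity of this kind (recalled here as background, under suitable smoothness and decay assumptions on the density): if \(x\) has a smooth density \(p\) on \(\mathbb{R}^d\) vanishing at infinity, then integration by parts gives
\[ \int_{\mathbb{R}^d} \nabla f(x) \, p(x) \, dx \ = \ - \int_{\mathbb{R}^d} f(x) \, \nabla p(x) \, dx \ = \ \int_{\mathbb{R}^d} f(x) \, \big( - \nabla \log p(x) \big) \, p(x) \, dx, \]
which trades the gradient of \(f\) for \(f\) multiplied by the score function \(-\nabla \log p(x)\), the mechanism underlying randomized smoothing and score-function gradient estimators.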
The many faces of integration by parts – I : Abel transformation
Integration by parts is a highlight of any calculus class. It leads to multiple classical applications for integration of logarithms, exponentials, etc., and it is the source of an infinite number of exercises and applications to special functions. In this post, I will look at a classical discrete extension that is useful in machine learning…
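For reference, the Abel transformation (summation by parts) is the discrete identity, stated here in its standard form: with partial sums \(A_k = \sum_{i=1}^{k} a_i\) and \(A_0 = 0\),
\[ \sum_{k=1}^{n} a_k b_k \ = \ A_n b_n \ + \ \sum_{k=1}^{n-1} A_k \, ( b_k - b_{k+1} ), \]
which mirrors integration by parts, with the partial sums playing the role of an antiderivative.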