Machine Learning Research Blog

Francis Bach

Unraveling spectral properties of kernel matrices – II

Posted on March 24, 2025 by Francis Bach

This month, we pursue our exploration of spectral properties of kernel matrices. As mentioned in a previous post, understanding how eigenvalues decay is not only fun but also key to the algorithmic and statistical properties of many learning methods (see, e.g., chapter 7 of my book “Learning Theory from First Principles”). This month, we look…
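
To make the objects concrete, here is a minimal numerical sketch (mine, not code from the post): the eigenvalues of the normalized kernel matrix K/n approximate those of the underlying integral operator, so the top of the spectrum stabilizes as the sample size grows. The Gaussian bandwidth and the uniform sampling below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel_eigs(n, sigma=0.5, d=2):
    """Eigenvalues of K/n for a Gaussian kernel on n i.i.d. uniform points."""
    X = rng.uniform(-1.0, 1.0, size=(n, d))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))
    return np.linalg.eigvalsh(K / n)[::-1]  # decreasing order

# The top eigenvalues should barely move as n grows,
# since they approximate those of the integral operator.
for n in (200, 400, 800):
    print(n, np.round(gaussian_kernel_eigs(n)[:5], 4))
```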

My book is (at last) out!

Posted on December 21, 2024 by Francis Bach

Just in time for Christmas, I received two days ago the first hard copies of my book! It is a mix of feelings of relief and pride after 3 years of work. As most book writers will probably acknowledge, it took much longer than I expected when I started, but overall it was an enriching…

Scaling laws of optimization

Posted on October 5, 2024 by Francis Bach

Scaling laws have been one of the key achievements of theoretical analysis in various fields of applied mathematics and computer science, answering the following key question: how fast does a method or algorithm converge as a function of (potentially partially) observable problem parameters? For supervised machine learning and statistics, probably the simplest and oldest…
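
As a toy illustration of such a law (my construction, not the post's running example): for gradient descent on a quadratic, the asymptotic convergence rate is governed by the condition number κ, an observable problem parameter. The dimension and κ below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic objective f(x) = 0.5 * x^T H x with prescribed condition number kappa
d, kappa = 20, 100.0
eigs = np.linspace(1.0, kappa, d)           # Hessian eigenvalues in [mu, L] = [1, kappa]
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag(eigs) @ Q.T

x = rng.standard_normal(d)
step = 1.0 / kappa                          # step size 1/L
errors = []
for _ in range(2000):
    x = x - step * (H @ x)                  # gradient descent; minimizer is 0
    errors.append(np.linalg.norm(x))

# Empirical per-iteration contraction vs the classical 1 - mu/L = 1 - 1/kappa
ratio = (errors[-1] / errors[-1000]) ** (1 / 999)
print(f"empirical rate {ratio:.5f} vs theory {1 - 1/kappa:.5f}")
```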

Unraveling spectral properties of kernel matrices – I

Posted on January 7, 2024 by Francis Bach

Since my early PhD years, I have plotted and studied eigenvalues of kernel matrices. In the simplest setting, take independent and identically distributed (i.i.d.) data, such as points sampled uniformly from a cube in 2 dimensions, take your favorite kernel, such as the Gaussian or Abel kernel, plot the eigenvalues in decreasing order, and see what happens. The…
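
The experiment is easy to reproduce; here is a minimal sketch (all parameter choices, including the sample size and the unit bandwidths, are mine):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 2))     # i.i.d. uniform points in the 2-d cube
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

kernels = {
    "Gaussian": np.exp(-dists**2),         # exp(-||x - y||^2)
    "Abel": np.exp(-dists),                # exp(-||x - y||)
}
for name, K in kernels.items():
    eigs = np.linalg.eigvalsh(K)[::-1]     # eigenvalues in decreasing order
    plt.semilogy(eigs[:200], label=name)

plt.xlabel("index"); plt.ylabel("eigenvalue"); plt.legend()
plt.show()
```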

Revisiting the classics: Jensen’s inequality

Posted on March 13, 2023 by Francis Bach

There are a few mathematical results that any researcher in applied mathematics uses on a daily basis. One of them is Jensen’s inequality, which allows bounding expectations of functions of random variables. It appears constantly in probabilistic arguments, and also serves as a tool to derive inequalities and optimization algorithms. In this blog…
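
For concreteness, a quick numerical sanity check of the inequality E[f(X)] ≥ f(E[X]) for a few convex functions f (my sketch; the choice of a Gaussian X is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # samples of X ~ N(0, 1)

# Jensen's inequality: for convex f, E[f(X)] >= f(E[X])
for name, f in [("exp", np.exp), ("square", np.square), ("abs", np.abs)]:
    print(f"{name}: E[f(X)] = {f(x).mean():.3f} >= f(E[X]) = {f(x.mean()):.3f}")
```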

Non-convex quadratic optimization problems

Posted on February 2, 2023 by Francis Bach

Among continuous optimization problems, convex problems (with convex objectives and convex constraints) define a class that can be solved efficiently with a variety of algorithms and with arbitrary precision. This is no longer true in general once the convexity assumption is removed (see this post). This of course does not mean that (1) nobody should attempt…
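
One classical illustration of a tractable non-convex quadratic problem (my example, not necessarily the one developed in the post): minimizing an indefinite quadratic form over the unit sphere, which is solved globally by an eigendecomposition despite the non-convexity.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
B = rng.standard_normal((d, d))
A = (B + B.T) / 2                      # symmetric, indefinite: a non-convex quadratic

# min_{||x|| = 1} x^T A x is non-convex, yet solved exactly by an eigendecomposition
vals, vecs = np.linalg.eigh(A)
x_star = vecs[:, 0]                    # eigenvector of the smallest eigenvalue
print("optimal value:", vals[0], "check:", x_star @ A @ x_star)
```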

Discrete, continuous and continuized accelerations

Posted on December 15, 2022 by Raphael Berthier

In optimization, acceleration is the art of modifying an algorithm in order to obtain faster convergence. Building accelerations and explaining their performance have been the subject of countless publications; see [2] for a review. In this blog post, we give a vignette of these discussions on a minimal but challenging example, Nesterov…
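
As a minimal sketch of what acceleration buys (mine, on a strongly convex quadratic with the usual momentum (√κ - 1)/(√κ + 1); the post's setting may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d, kappa = 50, 1000.0
eigs = np.linspace(1.0, kappa, d)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag(eigs) @ Q.T                          # Hessian with mu = 1, L = kappa
grad = lambda x: H @ x                               # f(x) = 0.5 x^T H x, minimized at 0

x0 = rng.standard_normal(d)
step = 1.0 / kappa
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # momentum for strongly convex f

x_gd = x0.copy()
x, x_prev = x0.copy(), x0.copy()
for t in range(500):
    x_gd = x_gd - step * grad(x_gd)                  # plain gradient descent
    y = x + beta * (x - x_prev)                      # Nesterov extrapolation
    x, x_prev = y - step * grad(y), x                # gradient step at y

print("gradient descent :", np.linalg.norm(x_gd))
print("Nesterov         :", np.linalg.norm(x))
```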

Sums-of-squares for dummies: a view from the Fourier domain

Posted on November 16, 2022 by Francis Bach

In these last two years, I have been intensively studying sum-of-squares relaxations for optimization, learning a lot from many great research papers [1, 2], review papers [3], books [4, 5, 6, 7, 8], and even websites. Much of the literature focuses on polynomials as the de facto starting point. While this leads to deep connections…
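
To fix ideas, here is a tiny sum-of-squares certificate checked numerically (my example; in general the Gram matrix G is found by semidefinite programming rather than written by hand):

```python
import numpy as np

# Is p(x) = x^4 + 2 x^2 + 1 a sum of squares?  Write p(x) = v(x)^T G v(x)
# with v(x) = (1, x, x^2); p is SOS iff some such G is positive semidefinite.
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

# Check that G reproduces the coefficients of p at a few sample points
x = np.linspace(-2, 2, 7)
v = np.stack([np.ones_like(x), x, x**2])
assert np.allclose(np.einsum("in,ij,jn->n", v, G, v), x**4 + 2 * x**2 + 1)

# G is PSD, so its Cholesky factor gives an explicit SOS decomposition:
# p(x) = sum_k (L^T v(x))_k^2, here p = 1 + (sqrt(2) x)^2 + (x^2)^2
L = np.linalg.cholesky(G)
print("smallest eigenvalue of G:", np.linalg.eigvalsh(G).min())
print("Cholesky factor:\n", L)
```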

Rethinking SGD’s noise – II: Implicit Bias

Posted on September 18, 2022 by Loucas Pillaud-Vivien and Scott Pesme

In the previous post, we showed (or at least tried to!) how the inherent noise of the stochastic gradient descent algorithm (SGD), in the context of modern overparametrised architectures, is structured and carries two important features: (i) it vanishes for interpolating solutions and (ii) it belongs to a low-dimensional manifold spanned by the gradients. Building…
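
Both features are easy to see on overparametrized least squares; a minimal sketch (mine, with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # overparametrized: more parameters than data
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star                  # noiseless labels: interpolation is possible

# Minimum-norm interpolating solution of the least-squares problem
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Per-sample gradients g_i = (x_i^T theta - y_i) x_i; the SGD noise is the
# deviation of a sampled g_i from the full gradient.  At an interpolating
# point every residual is zero, so the noise vanishes entirely.
grads = (X @ theta_hat - y)[:, None] * X
print("max per-sample gradient norm at interpolation:",
      np.linalg.norm(grads, axis=1).max())

# Away from the optimum, each g_i is a multiple of x_i, so the noise lives
# in the (at most n-dimensional) span of the inputs, not in all of R^d.
grads_away = (X @ rng.standard_normal(d) - y)[:, None] * X
print("rank of noise directions:", np.linalg.matrix_rank(grads_away), "<= n =", n)
```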

Rethinking SGD’s noise

Posted on July 25, 2022 by Loucas Pillaud-Vivien

It seemed a bit unfair to devote a blog to machine learning (ML) without talking about its current core algorithm: stochastic gradient descent (SGD). Indeed, SGD has become, year after year, the basic foundation of many algorithms used for large-scale ML problems. However, the history of stochastic approximation is much older than that of ML:…
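
As a minimal sketch of SGD in its simplest habitat, least-squares regression with one fresh observation per step (my code; the constant step size with Polyak-Ruppert averaging is one classical variant among many):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
theta_star = rng.standard_normal(d)

gamma = 0.05                            # constant step size
theta = np.zeros(d)
avg = np.zeros(d)                       # Polyak-Ruppert average of the iterates
for t in range(1, 100_001):
    x = rng.standard_normal(d)          # one fresh observation per step
    y = x @ theta_star + 0.1 * rng.standard_normal()
    g = (x @ theta - y) * x             # stochastic gradient of the squared loss
    theta -= gamma * g
    avg += (theta - avg) / t            # running average of the iterates

print("last iterate error :", np.linalg.norm(theta - theta_star))
print("averaged error     :", np.linalg.norm(avg - theta_star))
```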

About

I am Francis Bach, a researcher at INRIA in the Computer Science department of Ecole Normale Supérieure, in Paris, France. I have been working on machine learning since 2000, with a focus on algorithmic and theoretical contributions, in particular in optimization. All of my papers can be downloaded from my web page or my Google Scholar page. I also have a Twitter account.
