Thursday, June 8, 2017

Hyperparameter optimization and the analysis of boolean functions

I've always admired the beautiful theory of the analysis of boolean functions (see the excellent book by Ryan O'Donnell, as well as Gil Kalai's lectures). Heck, it used to be my main area of investigation back in the early grad-school days, when we were studying hardness of approximation, the PCP theorem, and the "new" (at the time) technology of Fourier analysis for boolean functions.  This technology also gave rise to seminal results on learnability with respect to the uniform distribution. Uniform-distribution learning has been widely criticized as unrealistic, and the focus of the theoretical machine learning community has shifted to (mostly) convex, real-valued, agnostic learning in recent years.

It is thus particularly exciting to me that some of the amazing work on boolean learning can be used for the very practical problem of Hyperparameter Optimization (HO), which has on the face of it nothing to do with the uniform distribution. Here is the draft, joint work with Adam Klivans and Yang Yuan.

To describe hyperparameter optimization: many software packages, in particular with the rise of deep learning, come with gazillions of input parameters. An example is the TensorFlow software package for training deep nets. To actually use it, the user needs to specify how many layers, which layers are convolutional / fully connected, which optimization algorithm to use (stochastic gradient descent, AdaGrad, etc.), whether to add momentum to the optimization or not... You get the picture - choosing the parameters is by itself an optimization problem!

It is a highly non-convex optimization problem, over mostly discrete (but some continuous) choices. Evaluating the function, i.e. training a deep net over a large dataset with a specific configuration, is very expensive, and you cannot assume any other information about the function such as gradients (which are not even defined for discrete functions).  In other words, sample complexity is of the essence, whereas computational complexity can be relaxed as long as no additional function evaluations are required.

The automatic choice of hyperparameters has been hyped recently under various names such as "auto tuning", "auto ML", and so forth. For excellent posts on existing approaches and their downsides, see Ben Recht's blog.

This is where harmonic analysis and compressed sensing can be of help! While in general hyperparameter optimization is computationally hard, we assume a sparse Fourier representation of the objective function. Under this assumption, one can use compressed sensing techniques to recover and optimize the underlying function with low sample complexity.
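To make this concrete, here is a minimal, purely illustrative sketch (not the Harmonica code, and with a made-up objective): treat each binary hyperparameter as a $\pm 1$ variable, evaluate the objective on a batch of random configurations, expand them in the low-degree parity (Fourier) basis, and run the Lasso to recover the few large Fourier coefficients.

```python
# Toy sketch of sparse Fourier recovery via the Lasso (hypothetical objective).
import itertools
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_vars, degree, n_samples = 10, 2, 300

def objective(x):
    # a made-up "hyperparameter objective", sparse in the parity basis
    return 2.0 * x[0] * x[3] - 1.5 * x[2] + 0.5 * x[5] * x[7]

# all monomials (parities) of degree at most `degree`
subsets = [s for d in range(1, degree + 1)
           for s in itertools.combinations(range(n_vars), d)]

X = rng.choice([-1.0, 1.0], size=(n_samples, n_vars))    # random configurations
y = np.array([objective(x) for x in X])                  # the "expensive" evaluations
feats = np.column_stack([np.prod(X[:, s], axis=1) for s in subsets])

lasso = Lasso(alpha=0.05).fit(feats, y)                  # sparse recovery via l1
recovered = {s: round(c, 2) for s, c in zip(subsets, lasso.coef_) if abs(c) > 0.1}
print(recovered)   # roughly {(0, 3): 2.0, (2,): -1.5, (5, 7): 0.5}
```

Once the few important monomials are recovered, optimizing the now-explicit sparse polynomial over its variables is cheap compared to further function evaluations.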

Surprisingly, using sparse recovery techniques one can even improve the known sample complexity results for well-studied problems such as learning decision trees under the uniform distribution, improving upon classical results by Linial, Mansour and Nisan.

Experiments show that the sparsity assumptions in the Fourier domain are justified for certain hyperparameter optimization settings, in particular when training deep nets for vision tasks. The harmonic approach (we call the algorithm "Harmonica") attains a better solution than existing approaches based on multi-armed bandits and Bayesian optimization, and, perhaps more importantly, attains it significantly faster. It also beats Random Search 5X (i.e. random sampling with 5 times as many samples allowed as for our own method, a smart benchmark proposed by Jamieson).

Those of you interested in code, my collaborator Yang Yuan has promised to release it on GitHub, so please go ahead and email him ;-)


PS. We're grateful to Sanjeev Arora for helpful discussions on this topic.

PPS. It has been a long time since my last post, and I still owe the readers an exposition on the data compression approach to unsupervised learning.  In the meantime, you may want to read this paper with Tengyu Ma in NIPS 2016.

Saturday, October 29, 2016

Unsupervised Learning II: the power of improper convex relaxations

In this continuation post I'd like to bring out a critical component of learning theory that is essentially missing in today's approaches for unsupervised learning: improper learning by convex relaxations.

Consider the task of learning a sparse linear separator. The sparsity, or $\ell_0$, constraint is non-convex and computationally hard. Here comes the Lasso - replace the $\ell_0$ constraint with an $\ell_1$ convex relaxation --- et voila --- you've obtained a polynomial-time solvable convex optimization problem.
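A tiny illustrative sketch of this relaxation (hypothetical data, using scikit-learn's Lasso): the combinatorial $\ell_0$ problem is swapped for an $\ell_1$-penalized least-squares problem, which nevertheless recovers the sparse support.

```python
# Toy sketch: the l1 relaxation recovers a sparse linear model in polynomial time.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d = 100, 50
w_true = np.zeros(d)
w_true[[3, 17, 42]] = [3.0, -4.0, 2.5]               # a 3-sparse "ground truth"

A = rng.normal(size=(n, d))
y = A @ w_true + 0.01 * rng.normal(size=n)

w_hat = Lasso(alpha=0.1).fit(A, y).coef_             # convex surrogate for the l0 constraint
print(sorted(np.flatnonzero(np.abs(w_hat) > 0.5)))   # expected support: [3, 17, 42]
```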

Another more timely example: latent variable models for recommender systems, a.k.a. the matrix completion problem. Think of a huge matrix, with one dimension corresponding to users and the other corresponding to media items (e.g. movies as in the Netflix problem). Given a set of entries in the matrix, corresponding to ranking of movies that the users already gave feedback on, the problem is to complete the entire matrix and figure out the preferences of people on movies they haven't seen.
This is of course ill posed without further assumptions. The low-rank assumption intuitively states that people's preferences are governed by few factors (say genre, director, etc.).  This corresponds to the user-movie matrix having low algebraic rank.

Completing a low-rank matrix is NP-hard in general. However, stemming from the compressed-sensing literature, the statistical-reconstruction approach is to assume additional statistical and structural properties over the user-movie matrix. For example, if this matrix is "incoherent", and the observations of the entries are obtained via a uniform distribution, then this paper by Emmanuel Candes and Ben Recht shows efficient reconstruction is still possible.

But is there an alternative to assumptions such as incoherence and i.i.d. uniform observations?

A long line of research has taken the "improper learning by convex relaxation approach" and applied it to matrix completion in works such as Srebro and Shraibman, considering convex relaxations to rank such as the trace norm and the max norm. Finally, in  joint work with Satyen Kale and Shai Shalev-Shwartz, we take this approach to the extreme by not requiring any distribution at all, giving regret bounds in the online convex optimization model (see previous post).
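The following is a minimal sketch of the trace-norm (nuclear-norm) relaxation in action on synthetic data - an iterative singular-value soft-thresholding heuristic, not the algorithm of any of the papers above:

```python
# Toy sketch: matrix completion via singular-value soft-thresholding (trace-norm relaxation).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, rank = 50, 40, 3
M = rng.normal(size=(n_users, rank)) @ rng.normal(size=(rank, n_movies))  # low-rank truth
mask = rng.random((n_users, n_movies)) < 0.5                              # observed entries

X, tau = np.zeros_like(M), 5.0
for _ in range(200):
    X[mask] = M[mask]                                  # agree with the observations
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt     # shrink the trace norm

print("relative error on unseen entries:",
      np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask]))
```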


By the exposition above one may think that this technique of improper convex relaxations applies only to problems whose hardness comes from "sparsity".  This is far from the truth, and in the very same paper referenced above, we show how the technique can be applied to combinatorial problems such as predicting cuts in graphs, and outcomes of matches in tournaments.

In fact, improper learning is such a powerful concept that the complexity of problems such as learning DNF formulas has remained open for quite a long time, despite the fact that proper learning of DNFs was long known to be NP-hard.

On the other hand, improper convex relaxation is not an all-powerful magic pill. When designing convex relaxations to learning problems, special care is required so as not to increase the sample complexity. Consider for example the question of predicting tournaments, as given in this COLT open problem formulation by Jake Abernethy. The problem, loosely stated, is to iteratively predict the outcome of a match between two competing teams from a league of N teams. The goal is to compete with, i.e. make not many more mistakes than, the best ranking of teams in hindsight. Since the number of rankings scales as $N!$, the multiplicative weights update method can be used to guarantee regret that scales as $\sqrt{T \log N!} = O(\sqrt{T N \log N})$. However, the latter, naively implemented, runs in time proportional to $N!$. Is there an efficient algorithm for the problem attaining similar regret bounds?
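Before turning to relaxations, here is a toy sketch of the inefficient baseline just described (a hypothetical mini-league, weighted-majority predictions over all $N!$ rankings); the exponential dependence on $N$ is exactly what the open problem asks to remove.

```python
# Toy sketch: multiplicative weights over all N! rankings for match prediction.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, T, eta = 5, 200, 0.3
rankings = list(itertools.permutations(range(N)))   # all N! "experts"
w = np.ones(len(rankings))

mistakes = 0
for _ in range(T):
    i, j = rng.choice(N, size=2, replace=False)     # a match between teams i and j
    # each ranking predicts that its higher-ranked team wins
    votes = np.array([1.0 if r.index(i) < r.index(j) else -1.0 for r in rankings])
    prediction = 1.0 if w @ votes >= 0 else -1.0    # weighted-majority prediction
    outcome = 1.0 if i < j else -1.0                # hidden truth: team 0 beats 1 beats 2 ...
    mistakes += int(prediction != outcome)
    w *= np.exp(-eta * (votes != outcome))          # downweight rankings that erred

print("mistakes over", T, "rounds:", mistakes)
```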

A naive improper learning relaxation would treat each pair of competing teams as an instance of binary prediction, for which we have efficient algorithms. The resulting regret bound, however, would scale with the number of pairs of teams among the $N$ candidates, i.e. as $\sqrt{T N^2}$, essentially removing any meaningful prediction guarantee. What has gone wrong?

By relaxing the decision set from rankings to arbitrary predictions over pairs of teams, we have enlarged it significantly, and thereby increased the number of examples needed to learn. This is a general and important concern for improper convex relaxations: one needs to relax the problem in such a way that the sample complexity (usually measured in terms of Rademacher complexity) doesn't explode. For the aforementioned problem of predicting tournaments, a convex relaxation that preserves sample complexity up to logarithmic factors is indeed possible, and described in the same paper.

In the coming posts we'll describe a framework for unsupervised learning that allows us to use the power of improper convex relaxations.





Thursday, October 6, 2016

A mathematical definition of unsupervised learning?

Extracting structure from data is considered by many to be the frontier of machine learning. Yet even defining "structure", or the very goal of learning structure from unlabelled data, is not well understood or rigorously defined.

In this post we'll give a little background to unsupervised learning and argue that, compared to supervised learning, we lack a well-founded theory to tackle even the most basic questions. In the next post, we'll introduce a new candidate framework.

Unsupervised learning is a commonplace component of many undergraduate machine learning courses and books. At the core, the treatment of unsupervised learning is a collection of methods for analyzing data and extracting hidden structure from this data.  Take for example John Duchi's "Machine Learning" course at Stanford. Under "Unsupervised learning", we have the topics: Clustering, K-means, EM, Mixture of Gaussians, Factor analysis, PCA (Principal Components Analysis), ICA (Independent Components Analysis). This is an excellent representation of the current state of the art:


  • Clustering and K-means:  by grouping "similar" elements together, usually according to some underlying metric, we can create artificial "labels" for data elements in hope that this rough labelling will be of future use, either in a supervised learning task on the same dataset, or in "understanding" the data. K-means is a widely used algorithm for finding k representative centers of the data, allowing for k "labels".
  • Mixture of Gaussians: an exemplary and widely used method stemming from the generative approach to unsupervised learning. This approach stipulates that the data is generated from a distribution, in this case a mixture of normal random variables. There are numerous algorithms for finding the centers of these Gaussians, thereby learning the structure of the data.
  • Factor analysis, PCA and ICA: the spectral approach to unsupervised learning is perhaps the most widely used, in which the covariance matrix of the data is approximated by the largest few singular vectors. This type of clustering is widely successful in text (word embeddings) and many other forms of data. 
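For readers who want to see these textbook approaches in action, the following is a quick, purely illustrative sketch using scikit-learn on synthetic data (the dataset and parameters are made up for the example):

```python
# Toy sketch: the three textbook unsupervised approaches listed above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
centers = np.array([[0, 0, 0], [5, 5, 0], [0, 5, 5]])
X = np.vstack([c + rng.normal(size=(100, 3)) for c in centers])   # 3 hidden groups

labels_km = KMeans(n_clusters=3, n_init=10).fit_predict(X)        # clustering / K-means
labels_gm = GaussianMixture(n_components=3).fit_predict(X)        # generative: mixture of Gaussians
X_low = PCA(n_components=2).fit_transform(X)                      # spectral: top singular directions

print(labels_km[:5], labels_gm[:5], X_low.shape)
```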



The above techniques are widely used and successful in practice, and under suitable conditions polynomial-time algorithms are known. The common thread amongst them is finding structure in the data, which turns out for many practical applications to be useful in a variety of supervised learning tasks.

Notable unsupervised learning techniques that are missing from the above list are Restricted Boltzmann Machines (RBMs), Autoencoders and Dictionary Learning (which was conceived in the context of deep neural networks). The latter techniques attempt to find a succinct representation of the data. Various algorithms and heuristics for RBMs, Autoencoders and Dictionary Learning exist, but these satisfy at least one of the following:

1) come without rigorous performance guarantees.

2) run in exponential time in the worst case.

3) assume strong generative assumptions about the input.

Recent breakthroughs in polynomial-time unsupervised learning due to Sanjeev and his group address points (1) & (2) above and require only (3). Independently, the same is also achievable by the method of moments, see e.g. this paper, originally from Sham's group @ MSR New England, and many more recent advances. What's the downside of this approach? The Arora-Hazan debate over this point, which the theory-ML students are exposed to in our weekly seminar, is a subject for a longer post...

What is missing then? Compare the above syllabus with that of supervised learning in most undergrad courses and books, and the difference in terms of theoretical foundations is stark. For supervised learning we have Vapnik's statistical learning theory - a deep and elegant mathematical theory that characterizes exactly what is learnable from labeled real-valued examples. Valiant's computational learning theory adds the crucial element of computation, and on top of the combined result we have hundreds of scholarly articles describing in detail problems that are learnable efficiently / learnable but inefficiently / improperly learnable / various other notions.

Is there meaning, in pure mathematical sense such as Vapnik and Valiant instilled into supervised learning, to the unsupervised world?

I like to point out in my ML courses that computational learning theory says something philosophical about science: the concept of "learning", or a "theory" such as a physical theory of the world, has a precise mathematical meaning independent of humans. While many of us scientists seek a "meaningful" theory in the human sense, there need not be one! It could very well be that a physical theory, for example that of condensed matter, has a "theory" in the Vapnik-Valiant sense, but not one that would be as "meaningful" and "elegant" in the Feynman sense.


How can we, then, give mathematical meaning to unsupervised learning in a way that:

  1. Allows natural notions of generalization from examples
  2. Improves future supervised learning tasks regardless of their nature
  3. Allows for a natural family of algorithms (hopefully, but not necessarily, based on convex optimization)


This will be the starting point for our next post...



Sunday, July 31, 2016

Faster Than SGD 1: Variance Reduction

SGD is well-known for large-scale optimization. In my mind, there have been so far two fundamentally different improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In this post I'd love to conduct a survey regarding (1), and I'd like to especially thank those ICML'16 participants who pushed me to write this post.

Consider the famous composite convex minimization problem
\begin{equation}\label{eqn:the-problem}
\min_{x\in \mathbb{R}^d} \Big\{ F(x) := f(x) + \psi(x) := \frac{1}{n}\sum_{i=1}^n f_i(x) + \psi(x) \Big\} \enspace, \tag{1}
\end{equation}
in which $f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x)$ is a finite average of $n$ functions, and $\psi(x)$ is a simple "proximal" function such as the $\ell_1$ or $\ell_2$ norm. In this finite-sum form, each function $f_i(x)$ usually represents the loss function with respect to the $i$-th data vector. Problem \ref{eqn:the-problem} arises in many places:
  • convex classification and regression problems (e.g. Lasso, SVM, Logistic Regression) fall into \ref{eqn:the-problem}.
  • some notable non-convex problems including PCA, SVD, CCA can be reduced to \ref{eqn:the-problem}.
  • the neural net objective can be written in \ref{eqn:the-problem} as well although the function $F(x)$ becomes non-convex; in any case, methods solving convex versions of \ref{eqn:the-problem} sometimes do generalize to non-convex settings.

Recall: Stochastic Gradient Descent (SGD)

To minimize objective $F(x)$, stochastic gradient methods iteratively perform the following update
$$x_{k+1} \gets \mathrm{argmin}_{y\in \mathbb{R}^d} \Big\{ \frac{1}{2 \eta } \|y-x_k\|_2^2 + \langle \tilde{\nabla}_k, y \rangle + \psi(y) \Big\} \enspace,$$
where $\eta$ is the step length and $\tilde{\nabla}_k$ is a random vector satisfying $\mathbb{E}[\tilde{\nabla}_k] = \nabla f(x_k)$ and is referred to as the gradient estimator. If the proximal function $\psi(y)$ equals zero, the update reduces to $x_{k+1} \gets x_k - \eta \tilde{\nabla}_k$.
A popular choice for the gradient estimator is $\tilde{\nabla}_k = \nabla f_i(x_k)$ for some random index $i \in [n]$ per iteration, and methods based on this choice are known as stochastic gradient descent (SGD). Since computing $\nabla f_i(x)$ is usually $n$ times faster than computing $\nabla f(x)$, SGD enjoys a low per-iteration cost as compared to full-gradient methods; however, SGD cannot converge at a rate faster than $1/\varepsilon$ even if $F(\cdot)$ is very nice.
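As a quick illustration of the update rule above, here is a minimal sketch of proximal SGD with $\psi(x) = \lambda \|x\|_1$ (so the argmin step is the usual soft-thresholding operator); the data and step size are made up for the example.

```python
# Toy sketch of the proximal SGD update with an l1 proximal term.
import numpy as np

def prox_l1(v, t):
    # prox of t*||.||_1: argmin_y { 1/2 ||y - v||^2 + t ||y||_1 } = soft-thresholding at level t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sgd_prox(grad_i, x0, n, eta, lam, iters, rng):
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(n)                       # unbiased: E[grad_i(x, i)] = grad f(x)
        x = prox_l1(x - eta * grad_i(x, i), eta * lam)
    return x

# Example: f_i(x) = 1/2 (<a_i, x> - b_i)^2, psi(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=n)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = sgd_prox(grad_i, np.zeros(d), n, eta=0.01, lam=0.1, iters=20000, rng=rng)
print(np.round(x_hat[:5], 2))                     # approximately sparse, close to x_true
```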

Key Idea: Variance Reduction Gives Faster SGD

The theory of variance reduction states that, SGD can converge much faster if one makes a better choice of the gradient estimator $\tilde{\nabla}_k$, so that its variance "reduces as $k$ increases". Of course, such a better choice must have (asymptotically) the same per-iteration cost as compared with SGD.

There are two fundamentally different ways to choose a better gradient estimator, the first one is known as SVRG, and the second one is known as SAGA (which is built on top of SAG). Both of them require each function $f_i(x)$ to be smooth, but such a requirement is not intrinsic and can be somehow removed.

  • Choice 1: the SVRG estimator (my favorite)
    Keep a snapshot vector $\tilde{x} = x_k$ every $m$ iterations (where $m$ is some parameter usually around $2n$), and compute the full gradient $\nabla f(\tilde{x})$ only for such snapshots. Then, set
    $$\tilde{\nabla}_k := \nabla f_i (x_k) - \nabla f_i(\tilde{x}) + \nabla f(\tilde{x})$$
    where $i$ is randomly chosen from $1,\dots,n$. The amortized cost of computing $\tilde{\nabla}_k$ is only 3/2 times bigger than SGD if $m=2n$ and if we store $\nabla f_i(\tilde{x})$ in memory.
  • Choice 2: the SAGA estimator.
    Store in memory $n$ vectors $\phi_1,\dots,\phi_n$ and set all of them to be zero at the beginning. Then, in each iteration $k$, set
    $$\tilde{\nabla}_k := \nabla f_i(x_k) - \nabla f_i(\phi_i) + \frac{1}{n} \sum_{j=1}^n \nabla f_j(\phi_j)$$
    where $i$ is randomly chosen from $1,\dots,n$. Then, very importantly, update $\phi_i \gets x_k$ for this $i$. If properly implemented, the per-iteration cost to compute $\tilde{\nabla}_k$ is the same as SGD.
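To make the estimators concrete, here is a minimal sketch of SVRG (Choice 1) on a toy, made-up ridge-regression instance, with $\psi = 0$ so the update is a plain gradient step; parameter values are illustrative only.

```python
# Toy sketch of the SVRG gradient estimator.
import numpy as np

def svrg(grad_i, full_grad, x0, n, eta, epochs, m, rng):
    x = x0.copy()
    for _ in range(epochs):
        x_snap = x.copy()
        mu = full_grad(x_snap)                          # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_snap, i) + mu   # variance-reduced estimator
            x -= eta * g
    return x

# Example: f_i(x) = 1/2 (<a_i, x> - b_i)^2 + (sigma/2) ||x||^2.
rng = np.random.default_rng(0)
n, d, sigma = 200, 10, 0.1
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)           # unit-norm rows, so each f_i is ~1-smooth
b = rng.normal(size=n)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i] + sigma * x
full_grad = lambda x: A.T @ (A @ x - b) / n + sigma * x
x_hat = svrg(grad_i, full_grad, np.zeros(d), n, eta=0.1, epochs=20, m=2 * n, rng=rng)
print(np.linalg.norm(full_grad(x_hat)))                 # should be small (linear convergence)
```

The SAGA estimator (Choice 2) differs only in replacing the snapshot gradients by the per-sample table $\nabla f_j(\phi_j)$ and its running average.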

How Is Variance Reduced?

Both choices of gradient estimators ensure that the variance of $\tilde{\nabla}_k$ approaches zero as $k$ grows. In a rough sense, both of them ensure that $\mathbb{E}[\|\tilde{\nabla}_k - \nabla f(x_k)\|^2] \leq O(f(x_k) - f(x^*))$, so the variance decreases as we approach the minimizer $x^*$. The proof of this is two lines long if $\psi(x)=0$, and requires a little more effort in the general setting; see for instance Lemma A.2 of this paper.

Using this key observation one can prove that, if $F(x)$ is $\sigma$-strongly convex and if each $f_i(x)$ is $L$-smooth, then the "gradient complexity" (i.e., # of computations of $\nabla f_i (x)$) of variance-reduced SGD methods to minimize Problem \ref{eqn:the-problem} is only $O\big( \big(n + \frac{L}{\sigma} \big) \log \frac{1}{\varepsilon}\big)$. This is much faster than the original SGD method.

Is Variance Reduction Significant?

Short answer: NO when first introduced, but becoming YES, YES and YES.

Arguably the original purpose of variance reduction is to make SGD run faster on convex classification / regression problems. However, variance-reduction methods cannot beat the slightly earlier coordinate-descent method SDCA, and perform worse than its accelerated variant AccSDCA (see for instance the comparison here).

Then why is variance reduction useful at all? The answer lies in the generality of Problem \eqref{eqn:the-problem}. In all classification and regression problems, each $f_i(x)$ is of the restricted form $loss(\langle a_i, x\rangle; b_i)$, where $a_i$ is the $i$-th data vector and $b_i$ is its label. However, Problem \eqref{eqn:the-problem} is a much bigger class, and each function $f_i(x)$ can encode a complicated structure of the learning problem. In the extreme case, $f_i(x)$ could encode a neural network where $x$ characterizes the weights of the connections (so $f_i$ becomes nonconvex). For such general problems, SDCA does not work at all.

In sum, variance-reduction methods, although converging at the same speed as SDCA, apply more widely.

The History of Variance Reduction

There are so many variance-reduction papers that even an expert sometimes can't keep track of all of them. Below, let me point out some interesting papers that one should definitely cite:
  • The first variance-reduction method is SAG.
    However, SAG is not known to work in the full proximal setting and thus (in principle) does not apply to, for instance, Lasso or anything L1-regularized. I conjecture that SAG also works in the proximal setting, although some of my earlier experiments seem to suggest that SAG is outperformed by its gradient-unbiased version SAGA in the proximal setting.
  • SAGA is a simple unbiased fix of SAG, and comes with a much simpler proof than SAG. In my experiments, SAGA never seems to perform worse than SAG.
  • SVRG was actually discovered independently by two groups of authors, group 1 and group 2. Perhaps because it contains no experiments, the first group's paper quickly went unnoticed (cited 24 times) while the second one became very famous (cited 200+ times). What a pity.
Because SAGA and SVRG are the popular choices, one may ask which one runs faster. My answer is:
  • It depends on the structure of the dataset: a corollary of this paper suggests that if the feature vectors are pairwise close, then SVRG is better, and vice versa.
  • Experiments seem to suggest that if all vectors are normalized to norm 1, then SVRG performs better.
  • If the objective is not strongly convex (such as Lasso), then a simple modification of SVRG outperforms both SVRG and SAGA. 
  • Also, when $f_i(x)$ is a general function, SAGA requires $O(nd)$ memory storage which could be too large to load into memory; SVRG only needs $O(d)$. 

What's Next Beyond Variance Reduction?

There are many works that have tried to extend SVRG to other settings. Most of them are not-so-interesting tweaks, but there are three fundamental extensions.
  • Shalev-Shwartz first studied Problem \ref{eqn:the-problem} where each $f_i(x)$ is non-convex (although the average $f(x)$ is convex). He showed that SVRG also works there, and this was later better formalized and slightly improved. This class of problems has given rise to the fastest low-rank solvers (in theory) for SVD and related problems.
  • Elad and I showed that SVRG also works for totally non-convex functions $F(x)$. This was independently discovered by another group of authors. They have published at least two more papers on this problem, one supporting the proximal setting, and one proving a SAGA variant.
The above two improvements are regarding what will happen if we enlarge the class of Problem \ref{eqn:the-problem}. The next improvement is regarding the same Problem \ref{eqn:the-problem} but an even faster running time:

Wednesday, July 6, 2016

More than a decade of online convex optimization

This nostalgic post is written after a tutorial in ICML 2016 as a recollection of a few memories with my friend Satyen Kale.

In ICML 2003 Zinkevich published his paper "Online Convex Programming and Generalized Infinitesimal Gradient Ascent" analyzing the performance of the popular gradient descent method in an online decision-making framework.

The framework addressed in his paper was an iterative game, in which a player chooses a point in a convex decision set, an adversary chooses a cost function, and the player suffers the cost which is the value of the cost function evaluated at the point she chose. The performance metric in this setting is taken from game theory: minimize the regret of the player - which is defined to be the difference of the total cost suffered by the player and that of the best fixed decision in hindsight.
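Here is a minimal sketch of that game, with online gradient descent (Zinkevich's algorithm) playing against linear cost functions over the unit Euclidean ball; the adversary and the step-size schedule are made up for the example, and the regret is measured against the best fixed point in hindsight.

```python
# Toy sketch of online gradient descent and its regret.
import numpy as np

def project_ball(x):
    # Euclidean projection onto the decision set {x : ||x|| <= 1}
    nrm = np.linalg.norm(x)
    return x if nrm <= 1 else x / nrm

rng = np.random.default_rng(0)
d, T = 10, 5000
x = np.zeros(d)
total_cost, cost_sum = 0.0, np.zeros(d)

for t in range(1, T + 1):
    c = rng.normal(size=d)                   # adversary reveals the cost f_t(x) = <c, x>
    total_cost += c @ x                      # player suffers f_t(x_t)
    cost_sum += c
    x = project_ball(x - c / np.sqrt(t))     # OGD step with eta_t ~ 1/sqrt(t)

best_fixed = -np.linalg.norm(cost_sum)       # min over the ball of <sum_t c_t, x>
print("regret:", total_cost - best_fixed)    # grows like O(sqrt(T)), as Zinkevich proved
```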

A couple of years later, circa 2004-2005, a group of theory students at Princeton decided to hedge their bets in the research world. At that time, finding an academic position in theoretical computer science was extremely challenging, and looking at other options was a reasonable thing to do. These were the days before the financial meltdown, when a Wall Street job was the dream of Ivy League graduates.

In our case - hedging our bets meant taking a course in finance at the ORFE department and looking at research problems in finance. We fell upon Tom Cover's timeless paper "universal portfolios" (I was very fortunate to talk with the great information theorist a few years later in San Diego and hear him tell about his influence on machine learning).  As good theorists, our first stab at the problem was to obtain a polynomial-time algorithm for universal portfolio selection, which we did. Our paper didn't get accepted to the main theory venues at the time, which turned out for the best in hindsight, pun intended :-)

Cover's paper on universal portfolios was written in the language of information theory and universal sequences, and applied to wealth which is multiplicatively changing. This was very different than the additive, regret-based and optimization-based paper of Zinkevich.

One of my best memories of all time is the moment in which the connection between optimization and Cover's method came to mind. It was more than a "guess" at first:  if online gradient descent is effective in online optimization, and if Newton's method is even better for offline optimization, why can't we use Newton's method in the online world?  Better yet - why can't we use it for portfolio selection?

It turns out that indeed it can, and thereby the Online Newton Step algorithm came to life, applied to portfolio selection, and presented in COLT 2006 (along with a follow-up paper devoted only to portfolio selection, with Rob Schapire.  Satyen and I had the nerve to climb up to Rob's office and waste his time for hours at a time, and Rob was too nice to kick us out...).

The connection between optimization, online learning, and the game theoretic notion of regret has been very fruitful since, giving rise to a multitude of applications, algorithms and settings. To mention a few areas that spawned off:

  • Bandit convex optimization - in which the cost value is the only information available to the online player (rather than the entire cost function, or its derivatives).
    This setting is useful to model a host of limited-observation problems common in online routing and reinforcement learning.
  • Matrix learning (also called "local learning") - for capturing problems such as recommendation systems and the matrix completion problem, online gambling and online constraint-satisfaction problems such as online max-cut.
  • Projection free methods - motivated by the high computational cost of projections of first order methods, the Frank-Wolfe algorithm came into renewed interest in recent years. The online version is particularly useful for problems whose decision set is hard to project upon, but easy to perform linear optimization over. Examples include the spectahedron for various matrix problems, the flow polytope for various graph problems, the cube for submodular optimization, etc.
     
  • Fast first-order methods - the connection of online learning to optimization introduced some new ideas into optimization for machine learning. One of the first examples is the Pegasos paper. By now there is a flurry of optimization papers in each and every major ML conference, some incorporating ideas from online convex optimization such as the adaptive regularization introduced in the AdaGrad paper. 
There are a multitude of other connections that should be mentioned here, such as the recent literature on adversarial MDPs and online learning, connections to game theory and equilibria in online games, and many more. For more (partial) information, see our tutorial webpage and this book draft.

It was a wild ride!  What's next in store for online learning?  Some exciting new directions in future posts...



Saturday, June 4, 2016

How to solve classification and regression fast, faster, and fastest

(post by Zeyuan Allen-Zhu)

I am often asked what is the best algorithm to solve SVM, to solve Lasso regression, to solve Logistic Regression, etc. At the same time, a growing number of first-order methods have been proposed recently, making it hard to keep track of the state of the art. I feel it is perhaps a good idea to have a blog post answering all these questions properly and simultaneously.

Consider the general problem of empirical risk minimization (ERM):
\begin{equation}\label{eqn:primal}
\min_{x\in \mathbb{R}^d} \Big\{ F(x) := \frac{1}{n} \sum_{i=1}^n f_i (\langle a_i, x \rangle) + \psi(x) \Big\}\tag{1}\end{equation}
Here, each $a_i \in \mathbb{R}^d$ can be viewed as a feature vector, each $f_i(\cdot)$ is a univariate loss function, and $\psi(x)$ can be viewed as a regularizer such as $\lambda \|x\|_1$ or $\frac{\lambda}{2}\|x\|_2^2$. There are naturally four classes of interesting classification or regression problems that fit into the above framework. Namely,
  • Case 1: $f_i(x)$ is smooth and $F(x)$ is strongly convex. Example: ridge regression.
  • Case 2: $f_i(x)$ is smooth and $F(x)$ is weakly convex. Example: Lasso regression.
  • Case 3: $f_i(x)$ is non-smooth and $F(x)$ is strongly convex. Example: SVM.
  • Case 4: $f_i(x)$ is non-smooth and $F(x)$ is weakly convex. Example: L1-SVM.
Somewhat surprisingly, it is not necessary to design an algorithm to solve each of the four cases above. For instance, one can "optimally" transform an algorithm solving Case 1 into algorithms solving Cases 2, 3 and 4 (link to paper). Since full-gradient based methods are too slow for large-scale machine learning, in this post I'll summarize only stochastic methods.

[[edit remark: one may also consider a few other interesting cases, including: Case 5, $f_i(x)$ is non-convex but $F(x)$ is strongly convex; Case 6, $f_i(x)$ is non-convex but $F(x)$ is weakly convex; Case 7, $f_i(x)$ is non-convex and $F(x)$ is non-convex too. Case 5 was first studied by Shai Shalev-Shwartz in this paper. I have slightly better results for Cases 5 and 6 in this paper. As for Case 7, a recent progress is this paper.]]

Running-Time Summaries

There are three classes of stochastic first-order methods, and the best known running times are respectively:
       | SGD Method (fast) | Non-Accelerated Methods (faster) | Accelerated Methods (fastest)
Case 1 | $O\Big(\frac{G d}{\sigma \varepsilon}\Big)$ | $O\Big(\big(nd + \frac{L d}{\sigma}\big) \log\frac{1}{\varepsilon} \Big)$ | $O\Big(\big(nd + \frac{\sqrt{n L} d}{\sqrt{\sigma}}\big) \log\frac{1}{\varepsilon} \Big)$
Case 2 | $O\Big(\frac{G d}{\varepsilon^2}\Big)$ | $O\Big(nd \log\frac{1}{\varepsilon} + \frac{L d}{\varepsilon} \Big)$ | $O\Big(nd \log\frac{1}{\varepsilon} + \frac{\sqrt{n L} d}{\sqrt{\varepsilon}} \Big)$
Case 3 | $O\Big(\frac{G d}{\sigma \varepsilon}\Big)$ | $O\Big(nd \log\frac{1}{\varepsilon} + \frac{G d}{\sigma \varepsilon} \Big)$ (useless) | $O\Big(nd \log\frac{1}{\varepsilon} + \frac{\sqrt{n G} d}{\sqrt{\sigma \varepsilon}} \Big)$
Case 4 | $O\Big(\frac{G d}{\varepsilon^2}\Big)$ | $O\Big(nd \log\frac{1}{\varepsilon} + \frac{G d}{\varepsilon^2} \Big)$ (useless) | $O\Big(nd \log\frac{1}{\varepsilon} + \frac{\sqrt{n G} d}{\varepsilon} \Big)$
In the above table, $L$ is the smoothness parameter for Cases 1/2, and (square root of) $G$ is the Lipschitz continuity parameter of $f_i$ for Cases 3/4, and $\sigma$ is the strong convexity parameter for Cases 1/3. It is clear from the table above that non-accelerated methods should not be used for cases 3 and 4, and also clear (using AM-GM inequality) that column 3 is faster than column 2.

If one simply wants to know how to obtain such running times, then the short answer is:
  • SGD results are more-or-less folklore, see for instance Section 3 of Elad Hazan's text book.
  • Non-accelerated results: Case 1 was first obtained by SAG, to the best of my knowledge. Case 2 was first obtained by Yang and me (by shaving off a log factor from SVRG). Cases 3/4 are not interesting but anyway obtainable, for instance using this reduction. In practice, SVRG and SVRG++ indeed outperform SGD for Cases 1 and 2, based on my experience.
  • Accelerated results: The tightest results for Cases 1/2/3/4 were first obtained by the new method Katyusha. If one is willing to lose a few log factors, these cases were first obtained by AccSDCA in 2013. Based on my experience, at least for Cases 1+2, Katyusha seems to give the best practical performance. For Cases 3+4, I would suggest APCG (see later).
If one is interested in, from a high level, how such results are obtained, I categorize all existing methods into dual-only methods, primal-only methods, and primal-dual methods.

Dual-Only Methods (SDCA, APCG, etc.)

Due to technical reasons, it is easier (and somehow earlier in history) to study ERM problems from the dual perspective. The dual problem of \eqref{eqn:primal} is:
\begin{equation}\label{eqn:dual} \min_{y \in \mathbb{R}^n} \Big\{ D(y) := \frac{1}{n}\sum_{i=1}^n f_i^*(y_i) + \psi^*\Big(-\frac{1}{n} \sum_{i=1}^n y_i a_i \Big) \Big\}
\tag{2}
\end{equation}
Above, $f_i^*$ and $\psi^*$ are the so-called Fenchel conjugates of $f_i$ and the regularizer $\psi$, respectively. For starters, Fenchel conjugates are easily computable for most applications; as a concrete example, if $\psi(x) = \frac{\lambda}{2}\|x\|_2^2$ is the L2 regularizer, then $\psi^*(y) = \frac{1}{2\lambda} \|y\|_2^2$. Section 5 of this paper provides lots of examples.
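As a quick sanity check of this example, the conjugate can be computed directly from the definition:
$$\psi^*(y) = \max_{x\in\mathbb{R}^d} \Big\{ \langle y, x\rangle - \frac{\lambda}{2}\|x\|_2^2 \Big\} ;$$
setting the gradient $y - \lambda x$ to zero gives the maximizer $x = y/\lambda$, and hence
$$\psi^*(y) = \frac{\|y\|_2^2}{\lambda} - \frac{\lambda}{2}\cdot\frac{\|y\|_2^2}{\lambda^2} = \frac{1}{2\lambda}\|y\|_2^2 .$$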

Note that the dual objective $D(y)$ is undefined for Cases 2 and 4. Although $D(y)$ is defined for Case 3, it is in theory impossible to translate an approximate dual solution $y$ to a primal solution. For such reasons, in theory people directly analyze how to minimize $D(y)$ for Case 1, and remember, such an algorithm can be turned into solvers for Cases 2/3/4 through reduction.

One can prove that if $F(x)$ is $\sigma$-strongly convex, then $D(y)$ is $\big(\frac{1}{n} + \frac{1}{\sigma n^2}\big)$-smooth with respect to each coordinate. For this reason, one can apply coordinate descent to minimize $D(y)$ directly, which results in the SDCA method; or apply accelerated coordinate descent to minimize $D(y)$ directly, which results in the APCG method. Note that SDCA is a non-accelerated method (column 2 of the table) and APCG is an accelerated method (column 3).

Although (accelerated or not) coordinate descent was well-known in optimization, it is not immediately clear at first glance how to apply it to minimize $D(y)$ in \eqref{eqn:dual}, mainly due to the presence of the Fenchel conjugates. The APCG paper provides a nice summary for beginners. 

I have another blog post discussing coordinate descent methods in details.

Primal-Only Methods (SGD, SVRG, etc.)

A stochastic method is called primal-only if it directly computes $f_i'(\cdot)$ for one random sample $i$ and updates $x$ accordingly. Primal-only methods are more desirable for lots of reasons. First, primal-only methods avoid the accuracy loss incurred when converting dual variables back to primal ones. Second, one can avoid the potentially involved Fenchel computation. Third, consider the more general version of \eqref{eqn:primal} obtained by replacing each $f_i(\langle a_i, x\rangle)$ with the more general form $f_i(x)$; this problem can only be solved by primal-only methods, because its dual (in some sense) does not exist.

The first stochastic method is stochastic gradient descent (SGD). Ignoring the existence of the regularizer $\psi(x)$, the SGD method iteratively updates $x_{k+1} \gets x_k - \eta \cdot f_i' (\langle a_i, x\rangle) a_i$ where $\eta$ is some learning rate and $i\in [n]$ is a random sample. It is clear from this formulation that SGD replaces the full gradient $\nabla := \frac{1}{n} \sum_{i=1}^n f_i' (\langle a_i, x\rangle) \cdot a_i$ with an unbiased random sample $\tilde{\nabla} := f_i' (\langle a_i, x\rangle) \cdot a_i$, which is $n$ times faster to compute.

The first theoretical breakthrough on primal-only methods is SAG (to the best of my knowledge), which introduces the variance-reduction technique in order to match the running time of SDCA (and thus column 2 of the table). Approximately a year later, SVRG and SAGA were introduced to replace SAG, with not only a simpler proof but also much better practical performance. The key idea of these methods is to design a better $\tilde{\nabla}$ so that its expectation $\mathbb{E}[\tilde{\nabla}]$ still equals the full gradient $\nabla$, while its variance somehow approaches zero. Unfortunately, all these variance-reduction based methods only match the non-accelerated running time (column 2).

In March 2016, I finally obtained the first accelerated method (thus column 3) that is primal-only. The key idea there is to introduce a momentum plus a negative momentum on top of the variance-reduced $\tilde{\nabla}$. Interested readers can find my method here, and I plan to give a more detailed survey on variance-reduction based methods later this summer.

Primal-Dual Methods (SPDC)

One can also solve \eqref{eqn:primal} from a saddle-point perspective. Consider
$$ \min_{x\in \mathbb{R}^d} \max_{y \in \mathbb{R}^n} \Big\{ \phi(x,y) := \frac{1}{n} y^T A x + \psi(x) - \frac{1}{n}\sum_{i=1}^n f_i^*(y_i) \Big\}$$
where $A = [a_1,\dots,a_n]^T \in \mathbb{R}^{n\times d}$ is the data matrix. One can prove that $F(x) = \max_y \phi(x,y)$ and $D(y) = - \min_x \phi(x,y)$, and therefore to solve the original ERM problem it suffices to solve this saddle-point problem.
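For readers who want to verify the first identity: maximizing over $y$ decouples across the coordinates $y_i$, and for closed convex $f_i$ the biconjugate satisfies $f_i^{**} = f_i$, so
$$ \max_{y} \phi(x,y) = \psi(x) + \frac{1}{n}\sum_{i=1}^n \max_{y_i} \big\{ y_i \langle a_i, x\rangle - f_i^*(y_i) \big\} = \psi(x) + \frac{1}{n}\sum_{i=1}^n f_i(\langle a_i, x\rangle) = F(x) . $$
The identity for $D(y)$ follows analogously by minimizing over $x$.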

In 2014, Zhang and Xiao provided an accelerated, stochastic method SPDC (column 3) that directly solves this saddle-point problem. It is built on the so-called accelerated primal-dual method of Chambolle and Pock, which I also plan to write a blog post about some time in the future. Unfortunately, some people have reported to me that the parameter-tuning steps for SPDC may be too complicated in practice, but I haven't tried it myself so I can't say for sure.

Final Remarks

In the above summary I have injected lots of my own (and perhaps controversial) opinions into the discussion. For instance, one can also regard SDCA as a primal-dual method because it somehow "also maintains primal variables". At the same time, the running times provided in my table are tight, due to the (almost) matching lower bounds recently shown by Woodworth and Srebro. Finally, let me conclude with a performance plot comparing Katyusha to other state-of-the-art methods:

Thursday, May 26, 2016

The complexity zoo and reductions in optimization

(post by Zeyuan Allen-Zhu and Elad Hazan)

The following dilemma is encountered by many of my friends when teaching basic optimization: which variant/proof of gradient descent should one start with? Of course, one needs to decide on which depth of convex analysis one should dive into, and decide on issues such as "should I define strong-convexity?", "discuss smoothness?", "Nesterov acceleration?", etc.

This is especially acute for courses that do not deal directly with optimization, where it is described as a tool for learning or as a basic building block for other algorithms. Some approaches:
  • I teach online gradient descent, in the context of online convex optimization, and then derive the offline counterpart. This is non-standard, but permits an extremely simple derivation and gets to the online learning component first. 
  • Sanjeev Arora teaches basic offline GD for the smooth and strongly-convex case first. 
  • In OR/optimization courses the smooth (but not strongly-convex) case is many times taught first. 
All of these variants have different proofs whose connections are perhaps not immediate. If one wishes to go into more depth, usually in convex optimization courses one covers the full spectrum of different smoothness/strong-convexity/acceleration/stochasticity regimes, each with a separate analysis (a total of 16 possible configurations!)

This year I've tried something different in COS511 @ Princeton, which turns out to also have research significance. We've covered basic GD for well-conditioned functions, i.e. smooth and strongly-convex functions, and then extended these results by reduction to all other cases!
A (simplified) outline of this teaching strategy is given in chapter 2 of this book.

Classical Strong-Convexity and Smoothness Reductions

Given any optimization algorithm A for the well-conditioned case (i.e., the strongly convex and smooth case), we can derive an algorithm for smooth but not strongly convex functions as follows.

Given a non-strongly convex but smooth objective $f$, define a new objective by
$$ \text{strong-convexity reduction:} \qquad f_1(x) = f(x) +  \epsilon \|x\|^2 $$
It is straightforward to see that $f_1$ differs from $f$ by at most $\epsilon$ times a distance factor, and in addition it is $\epsilon$-strongly convex. Thus, one can apply $A$ to minimize $f_1$ and get a solution which is not too far from the optimal solution for $f$ itself.  This simplistic reduction yields an almost optimal rate, up to logarithmic factors.
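A minimal sketch of this classical reduction (with plain gradient descent standing in for the well-conditioned solver A, and a made-up rank-deficient least-squares instance):

```python
# Toy sketch of the strong-convexity reduction f_1(x) = f(x) + eps*||x||^2.
import numpy as np

def gd_for_well_conditioned(grad, x0, smoothness, iters):
    # stand-in for "algorithm A": plain GD, which converges fast when the
    # objective is both smooth and strongly convex
    x = x0.copy()
    for _ in range(iters):
        x -= (1.0 / smoothness) * grad(x)
    return x

def minimize_smooth_by_reduction(grad_f, x0, L, eps, iters):
    # the reduction: hand A the regularized objective f_1(x) = f(x) + eps*||x||^2
    grad_f1 = lambda x: grad_f(x) + 2.0 * eps * x
    return gd_for_well_conditioned(grad_f1, x0, L + 2.0 * eps, iters)

rng = np.random.default_rng(0)
D = rng.normal(size=(30, 10)); D[:, -1] = 0.0        # rank-deficient: f is not strongly convex
b = rng.normal(size=30)
grad_f = lambda x: D.T @ (D @ x - b)                 # f(x) = 1/2 ||Dx - b||^2
L = np.linalg.norm(D, 2) ** 2
x_hat = minimize_smooth_by_reduction(grad_f, np.zeros(10), L, eps=1e-3, iters=5000)
print("objective value:", 0.5 * np.linalg.norm(D @ x_hat - b) ** 2)
```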

Similar simplistic reductions can be derived for (finite-sum forms of) non-smooth but strongly-convex functions (via randomized smoothing or Fenchel duality), and for functions that are neither smooth nor strongly-convex, by just applying both reductions simultaneously. Notice that such classes of functions include famous machine learning problems such as SVM, Logistic Regression, L1-SVM, Lasso, and many others.

Necessity of Reductions

This is not only a pedagogical question. In fact, very few algorithms apply to the entire spectrum of strong-convexity / smoothness regimes, and thus reductions are very often intrinsically necessary. To name a few examples,
  • Variance-reduction methods such as SAG, SAGA and SVRG require the objective to be smooth, and do not work for non-smooth problems like SVM. This is because for loss functions such as the hinge loss, no unbiased gradient estimator can achieve a variance that approaches zero.
  • Dual methods such as SDCA or APCG require the objective to be strongly convex, and do not directly apply to non-strongly convex problems. This is because for non-strongly convex objectives such as Lasso, their duals are not even well-defined.
  • Primal-dual methods such as SPDC require the objective to be both smooth and strongly convex.

Optimality and Practicality of Reductions

The folklore strong-convexity and smoothing reductions are suboptimal. Focusing on the strong-convexity reduction for instance:
  • It incurs a logarithmic factor log(1/ε) in the running time, leading to slower algorithms than direct methods. For instance, if $f(x)$ is smooth but non-strongly convex, gradient descent minimizes it to $\epsilon$ accuracy in $O(1/\epsilon)$ iterations; if one uses the reduction instead, the complexity becomes $O(\log(1/\epsilon)/\epsilon)$.
  • More importantly, algorithms based on such reductions become biased: the original objective value $f(x)$ does not converge to the global minimum, and if the desired accuracy is changed, one has to change the weight of the regularizer $\|x\|^2$ and restart the algorithm. 
These theoretical concerns also translate into running-time losses and parameter-tuning difficulties in practice. For such reasons, researchers usually make an effort to design unbiased methods instead.
  • For instance, SVRG in theory solves the Lasso problem (with smoothness $L$) in time $O((nd+\frac{Ld}{\epsilon})\log\frac{1}{\epsilon})$ using the reduction. Later, direct and unbiased methods for Lasso were introduced, including SAGA, which has running time $O(\frac{nd+Ld}{\epsilon})$, and SVRG++, which has running time $O(nd \log\frac{1}{\epsilon} + \frac{Ld}{\epsilon})$. 
One can find academic papers that derive various optimization improvements for only one of the settings, leaving the other settings open. An optimal and unbiased black-box reduction is thus a tool to extend optimization algorithms from one setting to the rest.

Optimal, Unbiased, and Practical Reductions

In this paper, we give optimal and unbiased reductions. For instance, the new reduction, when applied to SVRG, implies the same running time as SVRG++ up to constants, and is unbiased, so it converges to the global minimum. Perhaps more surprisingly, these new reductions imply theoretical results that were not previously known via direct methods. To name two such results:
  • On Lasso, it gives an accelerated running time $O(nd \log\frac{1}{\epsilon} + \frac{\sqrt{nL}d}{\sqrt{\epsilon}})$, where the best known result was not only a biased algorithm but also slower: $O( (nd + \frac{\sqrt{nL}d}{\sqrt{\epsilon}}) \log\frac{1}{\epsilon} )$.
  • On SVM with strong convexity $\sigma$, it gives an accelerated running time $O(nd \log\frac{1}{\epsilon} + \frac{\sqrt{n}d}{\sqrt{\sigma \epsilon}})$, where the best known result was not only a biased algorithm but also slower: $O( (nd + \frac{\sqrt{n}d}{\sqrt{\sigma \epsilon}}) \log\frac{1}{\epsilon} )$.
These reductions are surprisingly simple. In the language of strong-convexity reduction, the new algorithm starts with a regularizer $\lambda \|x\|^2$ of some large weight $\lambda$, and then keeps halving it throughout the convergence. Here, the time to decrease $\lambda$ can be either decided by theory or by practice (such as by computing duality gap).
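A minimal sketch of this halving idea (in the spirit of the reduction described above, not the paper's actual implementation; the solver, schedule and data are illustrative):

```python
# Toy sketch of the halving-regularizer reduction.
import numpy as np

def solve_stage(grad_f, lam, x0, L, iters):
    # GD on the regularized objective f(x) + lam*||x||^2, used as the stage solver
    x = x0.copy()
    for _ in range(iters):
        x -= (1.0 / (L + 2.0 * lam)) * (grad_f(x) + 2.0 * lam * x)
    return x

def halving_reduction(grad_f, x0, L, lam0, stages, iters_per_stage):
    x, lam = x0.copy(), lam0
    for _ in range(stages):
        x = solve_stage(grad_f, lam, x, L, iters_per_stage)   # warm start from the last stage
        lam /= 2.0                                            # halve the regularizer weight
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(40, 15)); b = rng.normal(size=40)
grad_f = lambda x: D.T @ (D @ x - b)                          # smooth, not necessarily strongly convex
L = np.linalg.norm(D, 2) ** 2
x_hat = halving_reduction(grad_f, np.zeros(15), L, lam0=1.0, stages=15, iters_per_stage=300)
print("final objective:", 0.5 * np.linalg.norm(D @ x_hat - b) ** 2)
```

Because the regularizer weight goes to zero geometrically, the bias it introduces vanishes, and there is no need to guess the weight in advance or restart when the target accuracy changes.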

A figure demonstrating the practical performance of our new reduction (red dotted curve), as compared to the classical biased reduction (blue curves, with different regularizer weights), is presented below.


As a final word - if you were ever debating whether to post your paper on arXiv, here is yet another example of how quickly it helps research propagate:  only a few weeks after our paper was made available online, Woodworth and Srebro had already made use of our reductions in their new paper.