This nostalgic post is written after a tutorial at ICML 2016, as a recollection of a few memories with my friend Satyen Kale.

At ICML 2003, Zinkevich published his paper "Online Convex Programming and Generalized Infinitesimal Gradient Ascent", analyzing the performance of the popular gradient descent method in an online decision-making framework.

The framework addressed in his paper is an iterative game: in each round a player chooses a point in a convex decision set, an adversary then chooses a cost function, and the player suffers the value of that cost function at the point she chose. The performance metric in this setting is taken from game theory: minimize the **regret** of the player, defined as the difference between the total cost suffered by the player and that of the best **fixed** decision in hindsight.
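The repeated game above can be sketched in a few lines. The quadratic costs, unit-ball decision set, and step-size schedule below are illustrative choices for this sketch, not the only ones Zinkevich's analysis allows:

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_oracles, dim, radius=1.0):
    """Play online gradient descent; grad_oracles[t] is revealed after round t."""
    x = np.zeros(dim)
    plays = []
    for t, grad in enumerate(grad_oracles, start=1):
        plays.append(x.copy())
        eta = radius / np.sqrt(t)          # the classic ~1/sqrt(t) step size
        x = project_to_ball(x - eta * grad(x), radius)
    return plays

# Toy adversary: shifting quadratic costs f_t(x) = ||x - c_t||^2.
rng = np.random.default_rng(0)
centers = [rng.uniform(-0.5, 0.5, size=2) for _ in range(200)]
grad_oracles = [lambda x, c=c: 2.0 * (x - c) for c in centers]
plays = online_gradient_descent(grad_oracles, dim=2)

total_cost = sum(float(np.sum((x - c) ** 2)) for x, c in zip(plays, centers))
u = np.mean(centers, axis=0)               # best fixed decision in hindsight
best_cost = sum(float(np.sum((u - c) ** 2)) for c in centers)
regret = total_cost - best_cost            # Zinkevich: grows only like sqrt(T)
```

For these quadratic costs the best fixed point in hindsight is simply the mean of the centers, which is why `u` can be computed in closed form here.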

A couple of years later, circa 2004-2005, a group of theory students at Princeton decided to hedge their bets in the research world. At that time, finding an academic position in theoretical computer science was extremely challenging, and looking at other options was a reasonable thing to do. These were the days before the financial meltdown, when a Wall Street job was the dream of Ivy League graduates.

In our case, hedging our bets meant taking a course in finance at the ORFE department and looking at research problems in finance. We fell upon Tom Cover's timeless paper "Universal Portfolios" (I was very fortunate to talk with the great information theorist a few years later in San Diego and tell him about his influence on machine learning). As good theorists, our first stab at the problem was to obtain a polynomial-time algorithm for universal portfolio selection, which we did. Our paper didn't get accepted to the main theory venues at the time, which turned out for the best in hindsight, pun intended :-)

Cover's paper on universal portfolios was written in the language of information theory and universal sequences, and applied to wealth, which changes multiplicatively. This was very different from the additive, regret-based, optimization-flavored paper of Zinkevich.

One of my best memories of all time is the moment the connection between optimization and Cover's method came to mind. It was little more than a "guess" at first: if online gradient descent is effective in online optimization, and if Newton's method is even better for offline optimization, why can't we use Newton's method in the online world? Better yet, why can't we use it for portfolio selection?

It turns out that indeed we can: thus the Online Newton Step algorithm came to life, was applied to portfolio selection, and was presented at COLT 2006 (along with a follow-up paper devoted solely to portfolio selection, with Rob Schapire. Satyen and I had the nerve to climb up to Rob's office and waste his time for hours at a time, and Rob was too nice to kick us out...).
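A schematic sketch of the Online Newton Step update on a toy quadratic loss may help. Two caveats: the real algorithm projects back onto the decision set in the norm induced by the accumulated matrix A (omitted here because the toy iterates stay in the domain), and the parameter gamma and the loss are illustrative choices:

```python
import numpy as np

def ons_step(x, grad, A_inv, gamma=1.0):
    """One Online Newton Step update.

    A accumulates rank-one outer products of the gradients; the point
    moves against the gradient preconditioned by A^{-1}, which is
    maintained cheaply via the Sherman-Morrison formula.
    """
    Ag = A_inv @ grad
    A_inv = A_inv - np.outer(Ag, Ag) / (1.0 + grad @ Ag)
    x = x - (1.0 / gamma) * (A_inv @ grad)
    return x, A_inv

# Toy stream: the same quadratic loss f(x) = ||x - c||^2 every round.
c = np.array([0.3, 0.4])
x, A_inv = np.zeros(2), np.eye(2)          # A initialized to the identity
for _ in range(20):
    x, A_inv = ons_step(x, 2.0 * (x - c), A_inv)
```

On this stationary toy stream the preconditioned step locks onto the optimum almost immediately, which is the second-order flavor that made the method attractive for exp-concave losses such as the log-wealth of a portfolio.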

The connection between optimization, online learning, and the game-theoretic notion of regret has been very fruitful since, giving rise to a multitude of applications, algorithms, and settings. To mention a few areas that spun off:

- Bandit convex optimization - in which the cost value is the only information available to the online player (rather than the entire cost function, or its derivatives). This setting is useful to model a host of limited-observation problems common in online routing and reinforcement learning.
- Matrix learning (also called "local learning") - for capturing problems such as recommendation systems and the matrix completion problem, online gambling, and online constraint-satisfaction problems such as online max-cut.
- Projection-free methods - motivated by the high computational cost of projections in first-order methods, the Frank-Wolfe algorithm has seen renewed interest in recent years. The online version is particularly useful for problems whose decision set is hard to project onto but easy to perform linear optimization over. Examples include the spectahedron for various matrix problems, the flow polytope for various graph problems, the cube for submodular optimization, etc.

- Fast first-order methods - the connection of online learning to optimization introduced new ideas into optimization for machine learning. One of the first examples is the Pegasos paper. By now there is a flurry of optimization papers in each and every major ML conference, some of which incorporate ideas from online convex optimization, such as the adaptive regularization introduced in the AdaGrad paper.
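The adaptive-regularization idea can be illustrated with a minimal diagonal-AdaGrad sketch; the badly scaled toy objective and the learning rate below are illustrative choices:

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.5, eps=1e-8, steps=500):
    """Diagonal AdaGrad: each coordinate's step is scaled down by the
    square root of that coordinate's accumulated squared gradients."""
    x = np.array(x0, dtype=float)
    g_sq = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        g_sq += g * g
        x -= lr * g / (np.sqrt(g_sq) + eps)
    return x

# Badly scaled quadratic: f(x) = 100*x0^2 + x1^2.  The per-coordinate
# scaling makes progress on both axes despite the 100x conditioning gap.
grad = lambda x: np.array([200.0 * x[0], 2.0 * x[1]])
x = adagrad(grad, [1.0, 1.0])
```

The per-coordinate learning rates are exactly the "adaptive regularization" viewpoint: the algorithm can be derived as online mirror descent with a regularizer learned from the observed gradients.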

There are a multitude of other connections worth mentioning, such as the recent literature on adversarial MDPs and online learning, connections to game theory and equilibria in online games, and many more. For more (partial) information, see our tutorial webpage and this book draft.

It was a wild ride! What's next in store for online learning? Some exciting new directions in future posts...
