Research

My research includes theoretical, methodological, and applied contributions to a variety of problems in business analytics. On the application side, I am broadly interested in resource allocation problems in revenue management, logistics, marketing, and e-commerce. My theoretical and methodological work primarily focuses on optimal learning in stochastic optimization. Below are some highlights from these areas.

Major areas of application

  • Non-profit management. I worked with the American Red Cross to improve direct-mail fundraising campaigns designed to cultivate and retain disaster donors. In a large-scale empirical study (published in Management Science, 2016) of over 8 million recorded interactions with donors, we applied statistical learning techniques to identify design elements that positively impacted response rates.
  • Business-to-business pricing. Consider a negotiation between a supplier of raw materials (the seller) and a manufacturer (the buyer), which ends in a final price offer named by the seller. If the price is rejected, the seller incurs a high opportunity cost; if the price is accepted, the seller is left wondering whether a higher price would also have worked. To complicate matters, the seller may face the task of pricing a large number of heterogeneous products.
  • Humanitarian logistics. International humanitarian organizations typically use small fleets of vehicles to complete large numbers of missions. Both vehicles and missions are heterogeneous, with multiple varying attributes. To minimize operating costs, it is necessary to develop dynamic assignment policies that optimally match vehicles to missions.
  • Transportation. "Flow capture" is a problem class where the goal is to optimally place facilities to intercept traffic flows moving through a highway network. In many applications, however, traffic flows are not static and may react to the facilities by adjusting their routes in an evasive manner (for instance, truckers may deviate from their shortest paths in order to avoid weigh-in-motion sensors). Our published research, which won the 2015 Glover-Klingman Prize, presents the first optimization model specifically intended to intercept evasive flows.

Methodological contributions

I have published a number of papers developing an algorithmic approach to optimal learning known variously as "expected improvement," "knowledge gradient," and "value of information." Essentially, this is the classic decision-theoretic concept of the value of information integrated into different types of optimization models. We are faced with an optimization problem (for example, a linear program or dynamic program) in which some key parameters (for example, costs or objective coefficients) are unknown. However, we can learn about the problem by conducting sequential experiments or simulations, whose outcomes are used to estimate the unknown parameters. By collecting more information, we can find a better solution to the original problem.
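A minimal sketch of this learn-then-optimize loop, in Python, under illustrative assumptions of my own (a handful of candidate decisions with unknown values, independent normal priors, and normally distributed measurement noise with known variance):

    import numpy as np

    rng = np.random.default_rng(0)

    # Five candidate decisions with unknown "true" values (hidden from the learner).
    true_values = np.array([1.0, 1.5, 0.8, 2.0, 1.2])
    noise_sd = 1.0                     # known standard deviation of experimental noise

    # Independent normal priors on the unknown values.
    mu = np.zeros(5)                   # prior means
    beta = np.full(5, 1.0 / 2.0 ** 2)  # prior precisions (1 / prior variance)
    beta_w = 1.0 / noise_sd ** 2       # measurement precision

    for n in range(50):
        x = n % 5                      # naive round-robin experimental design
        y = true_values[x] + noise_sd * rng.standard_normal()

        # Conjugate normal update of the belief about decision x.
        mu[x] = (beta[x] * mu[x] + beta_w * y) / (beta[x] + beta_w)
        beta[x] += beta_w

    # "Solve" the original problem using the estimated parameters.
    best = int(np.argmax(mu))
    print(f"recommended decision: {best}, posterior means: {np.round(mu, 2)}")

Here the experimental design is a naive round-robin; the optimal learning question is precisely how to choose which experiment to run next.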

Expected improvement (EI) is a Bayesian optimization methodology that allows us to quantify the impact of information on the economic value of the optimization problem. In so doing, we are able to make an explicit tradeoff between the actual economic outcomes of our decisions (revenue earned or costs incurred) and the information collected from observing those outcomes. EI methods are attractive for their adaptability to a wide variety of optimization models, such as linear programs, network models, regression-based optimization, dynamic programs, and simulation-based optimization.
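As a concrete illustration (a textbook form of the EI acquisition function, not the specific formula from any one of the papers below), the expected improvement of a candidate with normal posterior mean mu and standard deviation sigma over an incumbent value can be computed as follows:

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, incumbent):
        """Expected improvement of each candidate over the incumbent (maximization).

        mu, sigma : posterior mean and standard deviation of each candidate's value
        incumbent : value of the best solution identified so far
        """
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        z = (mu - incumbent) / sigma
        return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

    # Candidate 2 has a lower mean than candidate 0 but far more uncertainty,
    # so it scores highest and would be measured next.
    print(expected_improvement(mu=[1.0, 0.5, 0.9], sigma=[0.1, 0.1, 1.0], incumbent=1.0))

The candidate maximizing this quantity balances exploitation (a high posterior mean) against exploration (high residual uncertainty), which is exactly the tradeoff described above.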

Please see my publications for a comprehensive list of these papers. Some representative work from this stream includes:

  • Han, B., Ryzhov, I.O. & Defourny, B. (2016) "Optimal learning in linear regression with combinatorial feature selection." INFORMS Journal on Computing 28(4), 721-735. PDF
  • Ryzhov, I.O. & Powell, W.B. (2012) "Information collection for linear programs with uncertain objective coefficients." SIAM Journal on Optimization 22(4), 1344-1368. PDF
  • Ryzhov, I.O. & Powell, W.B. (2011) "Information collection on a graph." Operations Research 59(1), 188-201. PDF

More recently, I have become acutely aware of the close link between statistics and optimization. Most work on optimal learning assumes simple statistical models that can be easily computed and updated; the focus then shifts to developing efficient optimization procedures that take the parameters of these models as inputs and return recommended decisions.

In many situations, however, information is censored or incomplete, making it difficult to apply standard statistical tools. In such cases, the methodology of approximate Bayesian inference is an extremely promising approach for developing compact representations of beliefs. See the award-winning paper:

  • Qu, H., Ryzhov, I.O., Fu, M.C. & Ding, Z. (2015) "Sequential selection with unknown correlation structures." Operations Research 63(4), 931-948. PDF
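To give a flavor of what a compact belief representation can look like under censoring (a generic moment-matching sketch of my own, not the specific method from the paper above): suppose we hold a normal belief about an unknown quantity, but an observation only reveals whether the underlying outcome exceeded a threshold. The exact posterior is no longer normal, yet it can be projected back onto a normal belief by matching its first two moments:

    import numpy as np
    from scipy.stats import norm

    m0, s0 = 0.0, 1.0    # normal prior on the unknown quantity
    noise_sd = 0.5       # noise in the underlying (unobserved) continuous outcome
    c = 0.8              # censoring threshold: we only learn that the outcome exceeded c

    # Exact posterior after observing {outcome > c}, evaluated on a grid:
    #   p(theta | outcome > c) is proportional to N(theta; m0, s0^2) * Phi((theta - c) / noise_sd)
    grid = np.linspace(m0 - 8 * s0, m0 + 8 * s0, 4001)
    w = norm.pdf(grid, m0, s0) * norm.cdf((grid - c) / noise_sd)
    w /= w.sum()

    # Moment matching: project the non-normal posterior back onto a normal belief.
    m1 = np.sum(grid * w)
    s1 = np.sqrt(np.sum((grid - m1) ** 2 * w))
    print(f"approximate posterior: N({m1:.3f}, {s1:.3f}^2)")  # mean rises, variance shrinks

The updated belief is still described by just two numbers, which keeps the downstream optimization tractable even though the exact posterior has no convenient closed form.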

In recent work with my student Ye Chen, we have developed a new theoretical framework for proving the consistency of approximate Bayesian estimators.

Theoretical contributions

As described above, the expected improvement methodology is attractive for its computational efficiency and good practical performance in a wide variety of settings. At the same time, it has proved largely resistant to the usual forms of theoretical analysis employed in the optimal learning literature. In particular, the convergence rates of EI algorithms remained an open problem for nearly 20 years. My recent paper solves this problem in the context of stochastic ranking and selection with known and unknown variance:

  • Ryzhov, I.O. (2016) "On the convergence rates of expected improvement methods." Operations Research 64(6), 1515-1528. PDF