# Posters

## Scalable Sparse PCA: A Tractable MIP under Statistical Assumptions
Presenter: Kayhan Behdin
Affiliation: MIT
Co-authors: Rahul Mazumder

## Lower Bounds on the Size of General Branch-and-Bound Trees
Presenter: Yatharth Dubey
Affiliation: Georgia Tech
Co-authors: Santanu S. Dey, Marco Molinaro
Abstract:
A general branch-and-bound tree is a branch-and-bound tree that is allowed to use general disjunctions of the form $\pi^{\top} x \leq \pi_0 \,\vee\, \pi^{\top}x \geq \pi_0 + 1$, where $\pi$ is an integer vector and $\pi_0$ is an integer scalar, to create child nodes. We construct a packing instance, a set covering instance, and a Traveling Salesman Problem instance such that any general branch-and-bound tree that solves these instances must be of exponential size. We also verify that an exponential lower bound on the size of general branch-and-bound trees persists when we add Gaussian noise to the coefficients of the cross polytope, thus showing that a polynomial-size “smoothed analysis” upper bound is not possible. The results in this paper can be viewed as the branch-and-bound analog of the seminal paper by Chvátal et al., who proved lower bounds for the Chvátal–Gomory rank.
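For orientation, ordinary single-variable branching is recovered as the special case of such a disjunction in which $\pi$ is a standard unit vector:

```latex
x_i \leq \pi_0 \,\vee\, x_i \geq \pi_0 + 1 \qquad (\pi = e_i)
```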

## Two-Stage Distributionally Robust Optimization in Fixed-Charge Transportation
Presenter: Mohamed El Tonbari
Affiliation: Georgia Institute of Technology
Co-authors: Alejandro Toriello and George Nemhauser
Abstract:
We are motivated by natural disaster applications where data is limited. We solve a two-stage distributionally robust optimization model with a Wasserstein ambiguity set, where the first stage is a facility location problem and the second stage is a fixed-charge transportation problem. We develop a column and constraint generation algorithm and handle the presence of binary variables in the second stage by leveraging the structure of our support set and of the second stage value function.

## Copositive Duality for Discrete Markets and Games
Presenter: Cheng Guo
Affiliation: University of Toronto
Co-authors: Merve Bodur, Joshua A. Taylor
Abstract:
Optimization problems with discrete decisions are nonconvex and lack strong duality. It was shown in Burer (2009) that mixed-binary quadratic programs can be written as completely positive programs, which are convex. Completely positive reformulations of discrete optimization problems therefore have strong duality. We apply this perspective to two applications, nonconvex energy markets and integer programming games, obtaining several novel results. To facilitate implementation, we also design a cutting plane algorithm for solving copositive programs exactly.

## The Price of Anarchy in Series-Parallel Network Congestion Games
Presenter: Bainian Hao
Affiliation: University of Wisconsin-Madison
Co-authors: Carla Michini
Abstract:
We study the inefficiency of pure Nash equilibria in symmetric network congestion games defined over series-parallel networks with affine edge delays. We establish a bound of $2$ on the Price of Anarchy (PoA), which improves on the bound of $5/2$ valid for congestion games with affine delays on arbitrary networks. Fotakis (2010) showed that in the case of extension-parallel networks, a subclass of series-parallel networks, the PoA is $4/3$. This bound is not valid for series-parallel networks, since flow decompositions are in general not unique, and pure Nash equilibria might \emph{not} be global optima of the potential function. For an arbitrary pure Nash equilibrium flow $f$ in a series-parallel network, we define a greedy decomposition of $f$ into the single players’ strategies. We prove several properties of greedy decompositions and we crucially exploit these properties to derive our main result.
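For context, the quantity being bounded is the worst-case ratio between the social cost of a pure Nash equilibrium and that of a social optimum $f^{*}$ (notation ours; delays are affine, $d_e(x) = a_e x + b_e$ with $a_e, b_e \geq 0$):

```latex
\mathrm{PoA} \;=\; \max_{f \,\in\, \mathrm{PNE}} \frac{C(f)}{C(f^{*})},
\qquad C(f) \;=\; \sum_{e} f_e \, d_e(f_e)
```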

## Enumerating Integer Points in Polytopes with Bounded Subdeterminants in Polynomial Time
Presenter: Hongyi Jiang
Affiliation: Johns Hopkins University
Co-authors: Amitabh Basu
Abstract:
We show that one can enumerate the vertices of the convex hull of integer points in polytopes whose constraint matrices have bounded and nonzero subdeterminants, in time polynomial in the dimension and encoding size of the polytope. This extends a previous result by Artmann et al. who showed that integer linear optimization in such polytopes can be done in polynomial time.

## Fixed Parameter Tractable Algorithms for Lot-Sizing Problems with Multiple Capacities and Concave Costs
Presenter: Kartik Kulkarni
Affiliation: Virginia Tech
Co-authors: Manish Bansal (Virginia Tech)
Abstract:
We present fixed-parameter tractable algorithms for new generalizations of the classical capacitated lot-sizing problem in which the total production (or transportation) capacity in each time period is the sum of the capacities of a subset of n available machines (or vehicles) of different capacities. We also present a new algorithm for the lot-sizing problem with piecewise concave production costs with m breakpoints. In addition, we introduce an extended formulation for the MCLS-S without subcontracting with n = 2 and Wagner-Whitin costs, thereby generalizing the result of Pochet and Wolsey [1994, Math. Prog.] for n = 1. Finally, we develop polynomial exact algorithms for two variants of MCLS in which we assume “all-or-nothing” production in each time period and allow backlogging. Computational results show that all of the foregoing algorithms are efficient and stable in comparison to off-the-shelf solvers.

## Fair Network Design via Iterative Rounding
Presenter: Aditi Laddha
Affiliation: Georgia Institute of Technology
Co-authors: Mohit Singh, Santosh Vempala
Abstract:
We show that the \emph{multi-criteria} Survivable Network Design Problem with $\ell \ge 2$ cost functions has an $\ell$-approximation algorithm, given access to an oracle that solves its LP relaxation. Consequently, the \emph{multi-criteria} generalized Steiner network problem with $\ell$ cost functions has a polynomial-time $\ell$-approximation algorithm. This generalizes the celebrated result of Jain for a single cost function. We also show that the approximation factor is tight for the natural LP relaxation. The problem is motivated by network design where multiple players have different or competing valuations and must agree on a common network (e.g., infrastructure design).

## Faster Matchings via Learned Duals – Honorable Mention
Presenter: Thomas Lavastida
Affiliation: Carnegie Mellon University
Co-authors: Michael Dinitz, Sungjin Im, Benjamin Moseley, Sergei Vassilvitskii
Abstract:
A recent line of research investigates how algorithms can be augmented with machine-learned predictions to overcome worst-case lower bounds. This area has revealed interesting algorithmic insights into problems, with particular success in the design of competitive online algorithms. While predictions have predominantly been used to better cope with uncertainty in the online setting, a clear target in the area is improving algorithm running times with predictions.
We take a first step in this direction by combining the idea of machine-learned predictions with the idea of “warm-starting” primal-dual algorithms. We consider one of the most important primitives in combinatorial optimization: weighted bipartite matching. We identify three key challenges when using learned dual variables in a primal-dual algorithm. First, predicted duals may be infeasible, so we give an algorithm that efficiently maps predicted infeasible duals to nearby feasible solutions. Second, once the duals are feasible, they may not be optimal, so we show that they can be used to quickly find an optimal solution. Finally, such predictions are useful only if they can be learned, so we show that learning duals for matching has low sample complexity, establishing that they can be efficiently learned. We validate our theoretical findings through experiments on both real and synthetic data. As a result, we give a rigorous, practical, and empirically effective method to compute bipartite matchings.
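To fix ideas on the first challenge, here is a minimal sketch of one naive way to restore dual feasibility for min-cost bipartite matching; this projection (keep one side's duals fixed, lower the other side's just enough) is our own illustration and not the paper's algorithm, and all names are hypothetical.

```python
import numpy as np

def repair_duals(cost, y_left, y_right):
    """Map predicted duals to a nearby feasible point for min-cost
    bipartite matching, where feasibility requires
        y_left[i] + y_right[j] <= cost[i][j]  for all i, j.
    We keep the right-side duals fixed and lower each left-side dual
    to the tightest value the constraints allow (a naive repair,
    not the algorithm from the paper)."""
    cost = np.asarray(cost, dtype=float)
    y_left = np.asarray(y_left, dtype=float).copy()
    y_right = np.asarray(y_right, dtype=float)
    # Largest value each left dual may take given the fixed right duals.
    caps = (cost - y_right[None, :]).min(axis=1)
    y_left = np.minimum(y_left, caps)
    return y_left, y_right
```

Because each constraint couples exactly one left dual with one right dual, clamping the left duals independently is enough to restore feasibility while staying close to the prediction.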

## Proximity in Concave Integer Quadratic Programming
Presenter: Mingchen Ma
Affiliation: University of Wisconsin-Madison
Co-authors: Alberto Del Pia
Abstract:
A classic result by Cook, Gerards, Schrijver, and Tardos provides an upper bound of $n\Delta$ on the proximity of optimal solutions of an Integer Linear Programming problem and its standard linear relaxation. In this bound, $n$ is the number of variables and $\Delta$ denotes the maximum of the absolute values of the subdeterminants of the constraint matrix. Hochbaum and Shanthikumar, and Werman and Magagnosc, showed that the same upper bound is valid if a more general convex function is minimized, instead of a linear function. No proximity result of this type is known when the objective function is nonconvex. In fact, if we minimize a concave quadratic, no upper bound can be given as a function of $n$ and $\Delta$. Our key observation is that, in this setting, proximity phenomena still occur, but only if we also consider approximate solutions instead of optimal solutions only. In our main result we provide upper bounds on the distance between approximate (resp., optimal) solutions to a Concave Integer Quadratic Programming problem and optimal (resp., approximate) solutions of its continuous relaxation. Our bounds are functions of $n$, $\Delta$, and a parameter that controls the quality of the approximation. Furthermore, we discuss how far from optimal our proximity bounds are.
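For reference, the Cook–Gerards–Schrijver–Tardos bound states that for every optimal solution $x^{*}$ of the linear relaxation there exists an optimal integer solution $z^{*}$ with

```latex
\| z^{*} - x^{*} \|_{\infty} \;\leq\; n \Delta .
```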

## A Model for Increasing Voter Turnout through the Optimal Placement of Polling Precincts
Presenter: Aven-Leigh Schnarr
Affiliation: Bryn Athyn College
Abstract:
Scholarly research has found evidence supporting the conclusions that distance to polling locations can affect whether or not eligible individuals vote in an election (Haspel 561; McNulty 435; Brady 1) and that voter behavior can be modeled using rational choice theory (Riker 38). These two conclusions were used to create a model for increasing voter turnout through the optimal placement of potential voters’ polling locations. Under the assumption that the relationship between how far a potential voter lives from their designated polling place and their likelihood of voting is linear, this research created a linear model of the problem.
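To make the modeling idea concrete, here is a small sketch of a one-dimensional version under the stated linearity assumption; the decay parameters and the brute-force grid search are our own illustrative choices, not the author's model.

```python
import numpy as np

def total_turnout(voters, site, a=1.0, b=0.05):
    """Expected turnout under a linear distance-decay model:
    each voter votes with probability max(0, a - b * distance)."""
    d = np.abs(np.asarray(voters, dtype=float) - site)
    return np.maximum(0.0, a - b * d).sum()

def best_site(voters, grid):
    """Brute-force the polling location on a 1-D grid of candidate
    sites that maximizes expected turnout."""
    scores = [total_turnout(voters, x) for x in grid]
    return grid[int(np.argmax(scores))]
```

When every voter is within range of the site, maximizing turnout under this linear model is equivalent to minimizing total distance, so the optimum sits at the median voter position.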

## Planar Maximum Coverage Location Problem with Partial Coverage, Continuous Spatial Demand, and Adjustable Quality of Service
Presenter: Parshin Shojaee
Affiliation: Virginia Tech
Co-authors: Manish Bansal
Abstract:
Background: Over the years, many classes of problems related to locating service facilities to cover demand have been studied (Church & Murray, 2013). The Maximum Coverage Location Problem (MCLP) is one of the well-known classical facility location problems; it locates multiple facilities with a known service range so as to maximize the total covered demand. The classical MCLP considers a finite set of pre-specified candidate positions for the facilities. However, there is a generalization of the classical MCLP, referred to as the planar MCLP (PMCLP), that considers locating all the facilities anywhere in a continuous two-dimensional plane. In most of the PMCLP literature, demands are represented as aggregated points, except for two papers in which demand zones are represented as line segments or polygons. However, these studies assume that demand zones (points, line segments, or polygons) are either completely covered or not covered at all by any service zone. This coverage assumption is referred to as “binary coverage”. Binary coverage is a simplifying assumption that allows PMCLP to be modeled as a binary program, but it ignores partial coverage of demand zones. Recently, (Bansal & Kianfar, 2017) addressed this limitation and developed an exact algorithm for the PMCLP where “partial coverage” is allowed, and service and demand zones are defined by axis-parallel rectangles. However, they assumed that all facilities have the same fixed service range or quality of service (QoS), i.e., the dimensions of all rectangular service zones are fixed and identical.
Our Contribution: We introduce a new generalization of the PMCLP in which partial coverage is allowed, facilities have adjustable QoS, and the demand zones and the service zone of each facility are represented by two-dimensional spatial objects such as rectangles, circles, and polygons. We denote this generalization by PMCLP-PC-QoS. A key challenge in this problem is to simultaneously decide the positions of multiple facilities on a continuous two-dimensional plane and their QoS. Moreover, partial coverage is allowed in its true sense, i.e., covering only part of a demand zone is permitted, and the coverage accrued in the objective function is proportional to the demand of the covered part only. We present a greedy algorithm and a pseudo-greedy algorithm for this problem, along with their approximation ratios. We also investigate theoretical properties and propose exact algorithms for solving: (1) PMCLP-PC-QoS where demand and service zones are represented by axis-parallel rectangles (denoted by PMCLP-PCR-QoS), which also has applications in camera surveillance and satellite imaging; and (2) one-dimensional PMCLP-PC-QoS with non-homogeneous QoS, which has applications in river cleanups. These results extend and strengthen the only known exact algorithm, for PMCLP-PCR-QoS with fixed and identical QoS, in (Bansal & Kianfar, 2017). Based on our computational experiments, we observe that these exact and approximation algorithms are computationally efficient and effective.

## Approximating Sparse Semidefinite Programs – Honorable Mention
Presenter: Kevin Shu
Affiliation: Georgia Institute of Technology
Co-authors: Grigoriy Blekherman
Abstract:
We analyze a relaxation for sparse semidefinite programs with the aim of reducing the memory required to solve these programs, while still guaranteeing a high-quality solution. For a variety of cases, we obtain an approximation ratio of order $1+O(n/g^3)$, where $g$ is the size of the smallest cycle of length at least 4 in a graph encoding the sparsity pattern of the semidefinite program, and we develop methods for bounding the quality of this approximation in other cases.

## Decomposition Methods for Global Solutions of MILPs
Presenter: Kaizhao Sun
Affiliation: Georgia Institute of Technology
Co-authors: Mou Sun and Dr. Wotao Yin
Abstract:
This work introduces two decomposition-based methods for two-block mixed-integer linear programs (MILPs), which break the original problem into a sequence of smaller MILP subproblems. The first method is based on the $\ell_1$-augmented Lagrangian. The second method is based on the alternating direction method of multipliers. When the original problem has a block-angular structure, the subproblems of the first block have low dimensions and can be solved in parallel. We add reverse-norm cuts and augmented Lagrangian cuts to the subproblems of the second block. For both methods, we show asymptotic convergence to globally optimal solutions and present iteration upper bounds. Numerical comparisons with recent decomposition methods demonstrate the exactness and efficiency of our proposed methods.

## Hyperbolic Relaxation of Locally PSD Matrices
Presenter: Shengding Sun
Affiliation: Georgia Institute of Technology
Co-authors: Grigoriy Blekherman, Santanu S. Dey, Kevin Shu
Abstract:
We study the eigenvalue set of locally PSD matrices, i.e., matrices whose $k \times k$ principal submatrices are all PSD. The special case $k=2$ is the second-order cone relaxation of the PSD cone. We show that the eigenvalues of any locally PSD matrix must lie in the corresponding hyperbolicity cone of the elementary symmetric polynomial, and we prove a necessary and sufficient condition for the tightness of this relaxation. The relaxation allows us to minimize the smallest eigenvalue over locally PSD matrices subject to trace or Frobenius-norm normalization.
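The $k=2$ case above can be checked directly: the following sketch (ours, not the authors' code) verifies that a matrix with unit diagonal and $-1$ off-diagonal entries has every $2 \times 2$ principal submatrix PSD, yet the full matrix has a negative eigenvalue, so local PSD-ness is a strict relaxation.

```python
import numpy as np
from itertools import combinations

def is_locally_psd(A, k):
    """Check that every k x k principal submatrix of A is PSD."""
    n = A.shape[0]
    for idx in combinations(range(n), k):
        sub = A[np.ix_(idx, idx)]
        if np.linalg.eigvalsh(sub).min() < -1e-9:
            return False
    return True

# 1 on the diagonal, -1 off the diagonal: every 2x2 principal
# submatrix is [[1,-1],[-1,1]] (PSD), but the full 3x3 matrix
# equals 2I - J and has eigenvalue -1.
A = np.full((3, 3), -1.0)
np.fill_diagonal(A, 1.0)
```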

## A Network-based Column Generation Approach for Volumetric-Modulated Arc Therapy
Presenter: Amirhossein Vaeztehrani
Affiliation: University of Waterloo
Co-authors: Dr. Houra Mahmoudzadeh
Abstract:
Cancer patients often undergo radiation therapy (RT) as part of their treatment. The goal in RT is to deliver a prescribed dose to the tumor while sparing the healthy organs. Volumetric-modulated arc therapy (VMAT) is an RT modality in which the beam rotates around the patient while the multileaf collimator (MLC) changes shape to modulate the radiation. We model the VMAT problem as a mixed-integer program (MIP) to optimize the beam shapes and intensities. As the problem is extremely large, it is computationally hard to solve. We design a fast network-based column generation (CG) algorithm to solve the problem.

## Approximation Algorithms for the Generalized Incremental Knapsack Problem
Presenter: Lingyi Zhang
Affiliation: Columbia University
Co-authors: Yuri Faenza, Danny Segev

## Shattering Inequalities for Learning Optimal Decision Trees
Presenter: Zachary Zhou
Affiliation: University of Wisconsin-Madison
Co-authors: Justin J. Boutilier, Carla Michini
Abstract:
Decision trees are among the most popular techniques for interpretable machine learning. In addition to their use as a standalone method, decision trees form the foundation for several more sophisticated machine learning algorithms, such as random forests. Heuristic algorithms such as CART can efficiently learn decision trees, but offer no optimality guarantees due to their myopic nature. Recently, mixed-integer programming (MIP) techniques have been applied to learn optimal decision trees, which have been empirically shown to achieve better out-of-sample performance than heuristic approaches.
Most MIP formulations contain two key components: 1) a framework to model the routing of datapoints through the decision tree to leaf nodes, and 2) a framework that properly constructs the tree by devising branching rules at each branch node. Typically, big-M constraints are used to unify these two components under a single MIP, yielding weak linear programming (LP) relaxations and consequently slow runtimes. An exception is the case where the dataset contains only binary-valued features and the branching rules are univariate, meaning that the tests performed at each branch node of the tree involve only a single feature. Our goal is to efficiently compute decision trees with multivariate branching rules, which perform tests involving several features. Moreover, we wish to consider datasets with numerical features, rather than restrict ourselves to datasets with binary or categorical features. We remark that alternative approaches based on itemset mining, satisfiability solvers, and constraint programming do not easily extend to multivariate branching rules and/or numerical datasets. Our main contribution is a new class of valid inequalities for learning optimal multivariate decision trees. Each inequality encodes an inclusion-minimal set of datapoints that cannot be shattered by any multivariate branching rule. We also show that these inequalities are sparse, involving at most the number of features plus two variables. We leverage these valid inequalities within a Benders-like decomposition, where the master problem determines how to route each datapoint to a leaf node to minimize misclassification error, and the subproblem checks whether it is possible to construct multivariate branching rules that realize the given routing.
Specifically, for each branch node of the decision tree, we solve an LP feasibility subproblem that determines whether there is a hyperplane that can separate the datapoints arriving at the node as prescribed by the given routing; if not, then one of our valid inequalities is added to the master problem as a feasibility cut. Our approach does not require big-M constraints, but generates sparse cuts that capture the combinatorial structure of the problem to strengthen the LP relaxation and decrease training time. Preliminary numerical experiments using classic benchmark instances show that our decomposition achieves a significant improvement in training time while maintaining, and sometimes even improving, out-of-sample performance in comparison to other approaches from the literature.
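To illustrate the kind of LP feasibility subproblem described above, here is a minimal sketch that asks whether a hyperplane separates two labeled point sets; the unit-margin formulation and all names are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def separable(left, right):
    """LP feasibility check: is there a hyperplane w.x + b = 0 with
    w.x_i + b <= -1 for every point in `left` and w.x_j + b >= 1 for
    every point in `right`? The variables are (w, b) and the objective
    is zero, so this is a pure feasibility problem."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    d = left.shape[1]
    # Rows of A_ub @ [w, b] <= b_ub encoding both families of constraints.
    A_left = np.hstack([left, np.ones((len(left), 1))])      #   w.x + b <= -1
    A_right = -np.hstack([right, np.ones((len(right), 1))])  # -(w.x + b) <= -1
    A_ub = np.vstack([A_left, A_right])
    b_ub = -np.ones(len(A_ub))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success
```

When the LP is infeasible, no multivariate branching rule realizes the prescribed routing at that node, which is exactly the situation in which a shattering inequality would be generated as a cut.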