When it comes to optimizing the deployment of finite resources in real-world domains, anything less than perfection comes at a price. From agriculture, to e-commerce, to transportation, industries are compelled to optimize everything from processes to personnel — finding the most efficient solution while satisfying a set of constraints that may be at odds with one another.

The optimal solution, then, often requires tradeoffs. But there are some tradeoffs that you just can’t make; it’s physically impossible to assign a quarter of a tractor to a field or half a truck to a route — never mind a third of a flight attendant to an aircraft. Problems such as these, where solutions must deal in whole units, call for integer programming.
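To make the idea concrete, here is a toy integer program with hypothetical, made-up numbers (two routes, whole trucks only), solved by exhaustive search over a small feasible box. This is a minimal sketch for illustration, not how production solvers work:

```python
from itertools import product

# Hypothetical toy problem: assign whole trucks x and y to two routes.
# Maximize profit 3*x + 5*y
# subject to:  2*x + 4*y <= 10  (fuel budget)
#              x + y <= 4       (driver limit)
#              x, y >= 0 and integer.
# Brute force over the feasible box -- fine for two variables, but the
# search space grows exponentially with the number of variables.
best_value, best_point = None, None
for x, y in product(range(6), repeat=2):  # x, y in 0..5 covers the box
    if 2 * x + 4 * y <= 10 and x + y <= 4:
        value = 3 * x + 5 * y
        if best_value is None or value > best_value:
            best_value, best_point = value, (x, y)

print(best_point, best_value)  # (3, 1) 14
```

Real solvers rely on techniques such as branch-and-bound and cutting planes rather than enumeration, precisely because exhaustive search becomes infeasible as the number of variables grows.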

The integer programming problem, as presented by Karp in his classic 1972 paper, is NP-hard, and all known algorithms for solving it run in exponential time. But there are many applications, such as the trio of examples above, in which the number of variables is fixed and a more limited solution therefore suffices. After all, a farm only has so many acres. For practical reasons, many domains rely instead on heuristics that perform well enough but don’t offer the formal guarantees of an exact algorithm.

But in the decades since the release of Karp’s seminal work, researchers have attempted to make progress toward such a solution — and several succeeded. In 1983, mathematician Hendrik Lenstra offered an algorithm with a runtime of 2^{O(n^3)}; four years later, Ravi Kannan, and separately the duo of András Frank and Éva Tardos, arrived at an n^{O(n)}-time result. After that, progress stalled for more than 30 years.
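To get a feel for how far apart those bounds are, the following sketch compares the base-2 exponents of the three runtimes, with all hidden constants set to 1 — purely an illustrative assumption, since the actual constants in the results matter:

```python
import math

# Compare the base-2 exponents of the three runtime bounds, with hidden
# constants set to 1 (an assumption for illustration only):
#   Lenstra 1983:               2^{n^3}     -> exponent n^3
#   Kannan / Frank-Tardos 1987: n^n         -> exponent n * log2(n)
#   Reis-Rothvoss 2023:         (log n)^n   -> exponent n * log2(log2(n))
for n in (8, 64, 1024):
    print(n,
          n ** 3,
          round(n * math.log2(n)),
          round(n * math.log2(math.log2(n))))
```

Even at modest dimensions the gap is enormous: at n = 64 the exponents are roughly 262144 versus 384 versus 165, so each step in the sequence is a qualitative, not incremental, improvement.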

During that time, as explained in a Simons Foundation article last summer, it remained an open question whether the previous work could be improved to exp(O(n)), the fastest runtime achievable under the exponential-time hypothesis. Recently, a duo from the University of Washington — Allen School Ph.D. student Victor Reis and professor Thomas Rothvoss — put such speculation to rest by producing the first (log n)^{O(n)}-time algorithm for solving integer programming problems in a fixed number of variables.

The pair were motivated to tackle the question after studying the Subspace Flatness Conjecture proposed by Daniel Dadush in 2012, which itself drew upon work by Ravi Kannan and László Lovász in the late 1980s. Far from falling flat, Reis and Rothvoss’ work earned a Best Paper Award at the 64th IEEE Symposium on Foundations of Computer Science (FOCS 2023) last November in Santa Cruz, California.

“Dadush observed that Kannan and Lovász’ result could be turned into a recursive algorithm for solving integer programs with a run-time of n^n, which yielded a modest improvement over the 1987 result,” explained Rothvoss, who holds a joint appointment in the University of Washington Department of Mathematics. “He further posited that the factor of n could be replaced by a smaller O(log n) term, which would lead to an even more significant improvement. We proved that conjecture to be correct, up to a constant in the exponent.”

To do so, he and Reis employed a combination of well-established and relatively new techniques. One of the tools they were able to draw upon was what Rothvoss termed a “rather deep” result from asymptotic convex geometry known as the ℓ position for symmetric convex bodies, which emerged shortly before the search for a faster integer programming algorithm began in earnest. As described by the duo Tadeusz Figiel and Nicole Tomczak-Jaegermann and by Gilles Pisier, the result stipulates that any symmetric convex set can be linearly transformed to behave, in a certain sense, like a Euclidean ball. This property would prove important to the question at hand when, decades later, Oded Regev and Noah Stephens-Davidowitz produced their Reverse Minkowski Theorem. That theorem, which Rothvoss and Reis also put to good use, implied the correctness of Dadush’s Subspace Flatness Conjecture — assuming the convex set represented in the algorithm is a Euclidean ball.

But whereas the ℓ position guarantees that high-dimensional slices and projections have volume similar to that of the ball, there is a huge discrepancy when it comes to low-dimensional slices and projections — precisely the ones that, according to the work of Kannan and Lovász, one needs to control for integer programming. This gave the UW duo a problem to overcome.

“The Euclidean property at the lower dimension was much too weak for Thomas and me to directly apply the work of Regev and Stephens-Davidowitz,” Reis noted. “But we realized we could apply their theorem to the n/2-dimensional sublattice and then recurse on the rest. That gave us a bound for the Subspace Flatness Problem that translated to a significantly improved algorithm for integer programming.”

At present, the result is — you guessed it — theoretical; while it created a buzz in the research community, the duo’s algorithm is unlikely to be put into practice. At least, not yet.

“Sorry, you’re not likely to get your packages any faster based on our paper, but our method could inform the development of future heuristics and open up new questions for us to pursue,” said Rothvoss. “Speaking for myself, I’m perfectly happy with this result — but I know Victor doesn’t like that the exponent isn’t tight.”

“I think it would be really interesting to reduce the original Subspace Flatness Conjecture, with a single logarithmic factor, to other well-studied problems in high-dimensional geometry such as the KLS Conjecture,” Reis said. “There is also the question of whether there could be other approaches for integer programming that don’t rely on subspace flatness. For example, we still don’t know whether the problem can be solved in singly exponential time.”

Read the research paper here and a related Quanta magazine story here.