Researchers at ETH Zurich develop the fastest possible flow algorithm – Technology Org

Rasmus Kyng has written a near-perfect algorithm. It calculates the maximum transport flow at minimum cost for any type of network – be it rail, road or electricity – at a speed that is mathematically almost impossible to beat.

Image credit: Rawpixel, CC0 Public Domain

In a breakthrough reminiscent of Lucky Luke – the man who shoots faster than his shadow – Rasmus Kyng and his team have developed a superfast algorithm that is set to change an entire field of research. The team’s groundbreaking work involves what are known as network flow algorithms, which tackle the question of how to achieve maximum flow in a network while simultaneously minimizing transport costs.

Imagine you are using the European transport network and looking for the fastest and cheapest route to move as much freight as possible from Copenhagen to Milan. In such cases Kyng’s algorithm can be applied to calculate the optimal, lowest-cost traffic flow for any type of network – be it rail, road, water or the internet. The algorithm performs these calculations so fast that it can deliver the solution in the very moment a computer reads the data describing the network.

The larger the network, the faster the calculation

Before Kyng, no one had managed to do this – even though researchers had been working on the problem for almost 90 years. Previously, calculating the optimal flow took far more time than processing the network data. And as networks grew larger and more complex, the required computing time increased disproportionately faster than the actual size of the computing problem. This is why there are flow problems in networks that are too large for even a computer to calculate.

Kyng’s approach eliminates this problem: with his algorithm, computing time and network size grow at the same rate – somewhat like going on a hike while constantly maintaining the same pace, no matter how steep the path becomes. A glance at the raw figures shows how far we have come: until the turn of the millennium, no algorithm managed to compute faster than m^1.5, where m stands for the number of connections in a network, and merely reading the network data once already takes m time. In 2004, the computing time required to solve the problem was reduced to m^1.33. With Kyng’s algorithm, the “additional” computing time required to reach the solution after reading the network data is now negligible.
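To make these bounds concrete, here is a small illustrative Python sketch (the growth rates only, not any of the algorithms themselves) comparing how many elementary operations the historical m^1.5 and m^1.33 bounds imply versus a near-linear bound as the number of connections m grows:

```python
# Illustrative only: operation counts implied by the runtime bounds
# mentioned above, as a function of the number of connections m.

def operations(m: int, exponent: float) -> float:
    """Rough operation count for a bound of the form m**exponent."""
    return float(m) ** exponent

for m in (10_000, 1_000_000, 100_000_000):
    pre_2000 = operations(m, 1.5)     # best bound until ~2000: m^1.5
    bound_2004 = operations(m, 1.33)  # improved bound of 2004: m^1.33
    near_linear = operations(m, 1.0)  # near-linear: about the cost of reading the input
    print(f"m={m:>11,}  m^1.5={pre_2000:.2e}  m^1.33={bound_2004:.2e}  m={near_linear:.2e}")
```

For a network with 100 million connections, the m^1.5 bound implies roughly ten thousand times more work than simply reading the data once – which is why the gap widens as networks grow.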

Like a Porsche racing against a horse-drawn carriage

The ETH Zurich researchers have thus developed what is, in theory, the fastest possible network flow algorithm. Two years ago, Kyng and his team presented the mathematical proof of their concept in a groundbreaking paper. Scientists refer to these novel, almost optimally fast algorithms as “near-linear-time algorithms”, and the community of theoretical computer scientists reacted to Kyng’s success with a mixture of astonishment and excitement.

Kyng’s doctoral supervisor, Daniel A. Spielman, professor of applied mathematics and computer science at Yale and himself a pioneer and leader in the field, compared the “absurdly fast” algorithm to a Porsche overtaking horse-drawn carriages. Along with winning the 2022 Best Paper Award at the IEEE Annual Symposium on Foundations of Computer Science (FOCS), the paper was also highlighted in the computing journal Communications of the ACM, and the editors of the popular-science magazine Quanta named Kyng’s algorithm one of the ten biggest discoveries in computer science in 2022.

The ETH Zurich researchers have since refined their approach and developed further near-linear-time algorithms. For example, the first algorithms were still focused on fixed, static networks whose connections are directed, meaning they function like one-way streets in urban road networks. The algorithms published this year are now also able to compute optimal flows for networks that gradually change over time. Lightning-fast computation is an important step towards tackling highly complex and data-rich networks that change dynamically and very quickly, such as molecules or the brain in biology, or human friendships.

A lightning-fast algorithm for changing networks

On Thursday, Simon Meierhans, a member of Kyng’s team, presented a new near-linear-time algorithm at the annual ACM Symposium on Theory of Computing (STOC) in Vancouver. This algorithm solves the minimum-cost maximum-flow problem for networks that change incrementally as new connections are added. A second paper, accepted by the IEEE Symposium on Foundations of Computer Science (FOCS) in October, presents another algorithm that also handles deleted connections.

Specifically, these algorithms identify the shortest routes in networks where connections are added or removed. In real-world transport networks, examples of such changes in Switzerland include the complete closure and then partial reopening of the Gotthard Base Tunnel in the months after summer 2023, and the section of the A13 motorway that a recent landslide partially destroyed – the motorway being the main alternative route to the Gotthard Road Tunnel.

Faced with such changes, how does a computer, an online map service or a route planner calculate the lowest-cost and fastest connection between Milan and Copenhagen? Kyng’s new algorithms also compute the optimal route for such incrementally changing networks in near-linear time – so fast that the computing time for each new connection, whether added through rerouting or through the construction of new routes, is again negligible.

But what exactly is it that makes Kyng’s approach so much faster than any other network flow algorithm? In principle, all computational methods face the challenge of having to analyze the network in multiple iterations in order to find the optimal flow and the minimum-cost route. In doing so, they run through different variants of which connections are open, closed or congested because they have reached their capacity limit.

Calculate the whole network? Or just parts of it?

Before Kyng, computer scientists had to choose between two basic strategies for solving this problem. The first was modelled on the railway network and, in each iteration, computed a whole section of the network with a modified flow of traffic. The second strategy – inspired by the flow of electricity in the power grid – computed the entire network in each iteration, but used statistical mean values for the modified flow of each section of the network in order to speed up the calculation.

Kyng’s team has now combined the respective advantages of these two strategies into a fundamentally new approach. “Our approach is based on many small, efficient and low-cost computational steps, which – taken together – are much faster than a few large ones,” says Maximilian Probst Gutenberg, a lecturer and member of Kyng’s group, who played an important role in developing the near-linear-time algorithms.

A brief look at the history of this discipline adds a further dimension to the significance of Kyng’s success: flow problems in networks were among the first problems to be tackled systematically with algorithms in the 1950s, and flow algorithms played an important role in establishing theoretical computer science as a field of research in its own right. The famous algorithm developed by the mathematicians Lester R. Ford Jr. and Delbert R. Fulkerson also dates from this period. Their algorithm efficiently solves the maximum-flow problem, which asks how to transport as many goods as possible through a network without exceeding the capacity of the individual routes.
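As a point of reference, here is a minimal, hedged Python sketch of the Ford–Fulkerson idea in its breadth-first (Edmonds–Karp) form: repeatedly find an augmenting path from source to sink in the residual network and push flow along it. The dictionary-based graph encoding and the toy network are illustrative assumptions, not taken from any of the papers discussed here.

```python
# A minimal sketch of the classical Ford-Fulkerson method (BFS variant).
# This is the historical approach the article refers to, not Kyng's algorithm.
from collections import deque

def max_flow(capacity: dict, source, sink) -> int:
    """capacity: nested dict, capacity[u][v] = capacity of connection u -> v."""
    # Residual capacities, with reverse edges initialised to 0.
    residual = {u: dict(edges) for u, edges in capacity.items()}
    for u, edges in capacity.items():
        for v in edges:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: the flow is maximum
        # Collect the path and its bottleneck capacity.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Push the bottleneck amount, updating residual capacities.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy network: directed connections with capacities.
cap = {"s": {"a": 10, "b": 5}, "a": {"b": 15, "t": 10}, "b": {"t": 10}, "t": {}}
print(max_flow(cap, "s", "t"))  # 15
```

Each iteration recomputes a full path through the network – exactly the kind of large, repeated step that Kyng’s approach replaces with many small, cheap ones.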

Fast and comprehensive

These advances showed researchers that the maximum-flow problem, the minimum-cost problem (transshipment or transportation problem) and many other important network-flow problems can all be viewed as special cases of the general minimum-cost flow problem. Before Kyng’s research, most algorithms could solve only one of these problems at a time; moreover, they could neither do so particularly quickly nor be extended to the broader minimum-cost flow problem. The same applies to the pioneering flow algorithms of the 1970s, for which the theoretical computer scientists John Edward Hopcroft, Richard Manning Karp and Robert Endre Tarjan received the Turing Award, considered the “Nobel Prize” of computer science. Karp received it in 1985; Hopcroft and Tarjan in 1986.

Change in perspective from railways to electricity

It was not until 2004 that the mathematicians and computer scientists Daniel Spielman and Shang-Hua Teng – and later Samuel Daitch – succeeded in writing an algorithm that provided a fast and efficient solution to the minimum-cost flow problem as well. They were the group that shifted the focus to the flow of electricity in the power grid. Their switch from a railway to an electricity perspective created an important mathematical distinction: if a train is rerouted in the railway network because a line is out of service, the next-best route according to the timetable may already be occupied by a different train. In the electricity grid, however, the electrons that make up the electric current can be partially diverted onto a network connection through which other current is already flowing. Thus, unlike trains, electric current can, in mathematical terms, be transferred “partially” to a new connection.
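The electrical perspective can be illustrated with a toy calculation (a sketch under simplifying assumptions, not the researchers’ method): a current forced across two parallel connections splits fractionally so as to minimize the energy r1·f1² + r2·f2², whereas a train must take one route or the other in full.

```python
# Illustrative: how a total current splits across two parallel connections
# with resistances r1 and r2. Physics minimises the energy r1*f1^2 + r2*f2^2
# subject to f1 + f2 = total, which gives the fractional split below.

def electrical_split(total: float, r1: float, r2: float) -> tuple:
    """Fractional current split across two parallel resistances."""
    f1 = total * r2 / (r1 + r2)  # more current takes the lower-resistance route
    f2 = total * r1 / (r1 + r2)
    return f1, f2

f1, f2 = electrical_split(10.0, r1=1.0, r2=3.0)
print(f1, f2)  # 7.5 through the easier route, 2.5 through the harder one
```

No integral (train-like) assignment of the 10 units to a single route achieves this energy minimum – this divisibility is what makes the electrical view mathematically easier to work with.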

This partial rerouting enabled Spielman and his colleagues to compute such route changes much faster while, at the same time, recalculating the entire network after each change. “We gave up on Spielman’s vision of building the most powerful possible algorithm for the entire network,” says Kyng. “Instead, we applied his idea of partial route calculation to the earlier approaches of Hopcroft and Karp.” This computation of partial routes in each iteration played a major role in speeding up the overall flow calculation.

A turning point in theoretical principles

Much of the progress made by the ETH Zurich researchers is down to their decision to extend their work beyond the development of new algorithms. The team also uses and designs new mathematical tools that speed up their algorithms even further. In particular, they have developed a new data structure for organizing network data; this makes it possible to identify any change to a network connection extremely quickly, which in turn helps make the algorithmic solutions astonishingly fast. With so many applications for near-linear-time algorithms and the new data structures, the overall spiral of innovation may soon turn much faster than before.

Yet laying the foundations for solving very large problems that previously could not be computed efficiently is only one benefit of these much faster flow algorithms – they also change the way computers calculate complex tasks. “Over the past decade, there has been a revolution in the theoretical foundations for obtaining provably fast algorithms for fundamental problems in theoretical computer science,” writes an international group of researchers from the University of California, Berkeley, which counts among its members Rasmus Kyng and Deeksha Adil, a researcher at the Institute for Theoretical Studies at ETH Zurich.

Source: ETH Zurich