Example: 'Minimum bottleneck utilization routing using a tabu search heuristic'

Brief description

Given a set of nodes and links with their capacities, an offered traffic, and a set of admissible paths for each demand (given by the k-shortest paths in km for each demand, where k is a user-defined parameter), this algorithm uses a tabu search metaheuristic to find, for each demand, a pair of link-disjoint 1+1 routes, so that network congestion is minimized. For each demand, the candidate 1+1 pairs are computed as all the link-disjoint pairs (p1, p2) such that p1 and p2 are admissible paths for that demand.
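Under the assumption that each admissible path is represented by the set of links it traverses, the candidate 1+1 pair enumeration can be sketched as follows (an illustrative, self-contained sketch, not the Net2Plan API; all names are hypothetical):

```java
import java.util.*;

// Illustrative sketch (not the Net2Plan API): given the admissible paths of a
// demand, each path given as a list of link identifiers, enumerate all ordered
// candidate 1+1 pairs (p1, p2) whose paths share no link.
public class DisjointPairs {
    static boolean linkDisjoint(List<Integer> p1, List<Integer> p2) {
        Set<Integer> links = new HashSet<>(p1);
        for (int e : p2) if (links.contains(e)) return false;
        return true;
    }

    // Returns index pairs (i, j) into the admissible path list, i = primary, j = backup
    static List<int[]> candidatePairs(List<List<Integer>> admissiblePaths) {
        List<int[]> pairs = new ArrayList<>();
        for (int i = 0; i < admissiblePaths.size(); i++)
            for (int j = 0; j < admissiblePaths.size(); j++)
                if (i != j && linkDisjoint(admissiblePaths.get(i), admissiblePaths.get(j)))
                    pairs.add(new int[]{i, j});
        return pairs;
    }

    public static void main(String[] args) {
        // Three admissible paths: paths 0 and 1 share link 2; path 2 is disjoint from both
        List<List<Integer>> paths = Arrays.asList(
                Arrays.asList(0, 2), Arrays.asList(1, 2), Arrays.asList(3, 4));
        for (int[] p : candidatePairs(paths))
            System.out.println(p[0] + "," + p[1]);
    }
}
```

For the three-path example above, the admissible pairs are (0,2), (1,2), (2,0) and (2,1); the pair (0,1) is rejected because both paths traverse link 2.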

Algorithm description table

Algorithm inputs

Requires a topology (nodes and links) and a demand set within the netPlan object.

Algorithm parameters:

  • k: Number of candidate paths per demand. Default: 10
  • ts_maxNumIt: Number of iterations in the outer loop. Default: 1000
  • ts_tabuListTenureFraction: Size of the tabu list, as a fraction of the number of demands. Default: 0.1
  • ts_numItNonImprovingToRandomize: Number of consecutive iterations without improving the best solution that triggers a randomization of the current solution. Default: 10
Algorithm outputs: The routes and path protection segments are added to the netPlan object. Any previous routes are removed.
Required libraries: None
Keywords: Flow assignment (FA), Protection algorithm, Tabu search (TS)
Authors: Pablo Pavón Mariño, José Luis Izquierdo Zaragoza
Date: March 2013
Code: FA_TS_minCongestion11.java

Detailed description

Two solutions are considered neighbors if they differ in the routing of exactly one demand (its primary path, its backup path, or both). The number of iterations is a user-defined parameter. The tabu list size is also a user-defined parameter, expressed as a fraction of the number of demands. If the current solution does not improve the best solution found during a number of consecutive iterations (again a user-defined parameter), the algorithm randomizes the current solution to jump to a different region of the search space. This randomization makes use of a long-term memory structure, which counts, for each 1+1 pair p, the number of times that introducing this pair into a solution produced an improvement. The intent is to capture the pairs that have most often produced improving results, and to use this information to bias the randomization jumps.
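The loop described above can be sketched as follows. This is an illustrative, self-contained version (the data layout, names, and toy instance are assumptions, not the FA_TS_minCongestion11.java code): each demand selects one candidate 1+1 pair, a move re-routes a single demand, recently re-routed demands are tabu, and the long-term memory biases the randomization towards historically improving pairs.

```java
import java.util.*;

// Tabu search sketch for minimum bottleneck utilization 1+1 routing.
// pairs[d][p] lists the links carrying demand d's traffic when it uses its
// candidate 1+1 pair p (the union of both paths, since in 1+1 protection the
// traffic is sent over primary and backup simultaneously).
public class TabuSketch {
    // Congestion = bottleneck utilization = max over links of load / capacity
    static double congestion(int[] sol, int[][][] pairs, double[] traf, double[] cap) {
        double[] load = new double[cap.length];
        for (int d = 0; d < sol.length; d++)
            for (int e : pairs[d][sol[d]]) load[e] += traf[d];
        double worst = 0;
        for (int e = 0; e < cap.length; e++) worst = Math.max(worst, load[e] / cap[e]);
        return worst;
    }

    static int[] search(int[][][] pairs, double[] traf, double[] cap,
                        int maxIt, int tabuTenure, int maxNonImproving, long seed) {
        Random rnd = new Random(seed);
        int D = pairs.length;
        int[] cur = new int[D];                    // initial solution: pair 0 everywhere
        int[] best = cur.clone();
        double bestCost = congestion(cur, pairs, traf, cap);
        Deque<Integer> tabu = new ArrayDeque<>();  // demands re-routed recently
        long[][] memory = new long[D][];           // improving picks per (demand, pair)
        for (int d = 0; d < D; d++) memory[d] = new long[pairs[d].length];
        int nonImproving = 0;
        for (int it = 0; it < maxIt; it++) {
            // Best non-tabu neighbor: re-route a single demand
            int moveD = -1, moveP = -1;
            double moveCost = Double.MAX_VALUE;
            for (int d = 0; d < D; d++) {
                if (tabu.contains(d)) continue;
                for (int p = 0; p < pairs[d].length; p++) {
                    if (p == cur[d]) continue;
                    int old = cur[d]; cur[d] = p;
                    double c = congestion(cur, pairs, traf, cap);
                    cur[d] = old;
                    if (c < moveCost) { moveCost = c; moveD = d; moveP = p; }
                }
            }
            if (moveD < 0) break;                  // every demand is tabu
            cur[moveD] = moveP;
            tabu.addLast(moveD);
            if (tabu.size() > tabuTenure) tabu.removeFirst();
            if (moveCost < bestCost) {
                bestCost = moveCost; best = cur.clone();
                memory[moveD][moveP]++;            // remember the improving choice
                nonImproving = 0;
            } else if (++nonImproving >= maxNonImproving) {
                // Randomization jump, biased towards historically improving pairs
                for (int d = 0; d < D; d++) {
                    long total = 0;
                    for (long m : memory[d]) total += m + 1;
                    long r = (long) (rnd.nextDouble() * total);
                    for (int p = 0; p < memory[d].length; p++) {
                        r -= memory[d][p] + 1;
                        if (r < 0) { cur[d] = p; break; }
                    }
                }
                nonImproving = 0;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy instance: 4 links of capacity 10; two demands of 6 units, each with
        // two candidate pairs, one over links {0,1} and one over links {2,3}.
        int[][][] pairs = {{{0, 1}, {2, 3}}, {{0, 1}, {2, 3}}};
        double[] traf = {6, 6}, cap = {10, 10, 10, 10};
        int[] best = search(pairs, traf, cap, 50, 1, 3, 0);
        System.out.println(congestion(best, pairs, traf, cap));
    }
}
```

On the toy instance, routing both demands over the same pair saturates its links (utilization 1.2), while spreading them over different pairs yields the optimal bottleneck utilization of 0.6, which the search finds in the first iteration.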