A few months ago I wrote a post about my interest in Ant Colony Optimization (ACO) and my goal of adding it to my metaheuristic bag of tricks. I was finally able to code up a version of an ACO algorithm and start testing it on the Traveling Salesman Problem (TSP). ACO was originally developed to find the optimal path through a graph, so solving TSPs is a natural first test. I have been testing with symmetric TSP instances from TSPLIB (mostly pcb442.tsp and st70.tsp), comparing not only against the optimal solution (all symmetric TSP instances in TSPLIB now have a known optimal solution!) but also against the results of a Random Keys genetic algorithm. So far I have been impressed by the ACO’s performance, particularly compared with the GA.
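For context on what “coding up a version of an ACO” involves, here is a minimal sketch of the basic Ant System scheme for the TSP. This is an illustrative rewrite, not my actual implementation; the parameter names (`alpha`, `beta`, `rho`, `q`) are the conventional Ant System constants, and the specific values are just placeholders:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_tour(dist, pheromone, alpha=1.0, beta=2.0):
    """Build one tour: pick each next city with probability proportional
    to pheromone^alpha * (1/distance)^beta (roulette-wheel selection)."""
    n = len(dist)
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        weights = [(j, (pheromone[cur][j] ** alpha) * ((1.0 / dist[cur][j]) ** beta))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.random() * total, 0.0
        chosen = weights[-1][0]  # fallback guards against floating-point round-off
        for j, w in weights:
            acc += w
            if acc >= r:
                chosen = j
                break
        tour.append(chosen)
        unvisited.remove(chosen)
    return tour

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=100.0):
    """Basic Ant System: every ant deposits pheromone each iteration."""
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = [ant_tour(dist, pheromone, alpha, beta) for _ in range(n_ants)]
        # evaporation: all trails decay by factor (1 - rho)
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= (1.0 - rho)
        # deposit: shorter tours lay down more pheromone on their edges
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += q / length
                pheromone[b][a] += q / length  # symmetric TSP
    return best_tour, best_len
```

On a tiny instance (say, four cities at the corners of a unit square) this reliably recovers the perimeter tour; the interesting tuning questions only show up on instances like st70.tsp and pcb442.tsp.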

For the TSPLIB problem st70.tsp I ran the GA and ACO algorithms 20 times each and recorded the overall best solution found by each algorithm, as well as the average and max solutions. The optimal solution for this problem is 675, so both the GA and ACO find good values, but neither found the optimum. Of course, each run lasted only about 45 seconds on average; I assume finding the optimal value for this 70-city problem takes longer than that.

| Algorithm | Best Solution | Avg Solution | Max Solution | Avg Run Time (secs) |
|-----------|---------------|--------------|--------------|---------------------|
| GA        | 678           | 723          | 773          | 45.2                |
| ACO       | 731           | 744          | 754          | 47.6                |

I also graphed the 20 solutions found by each algorithm:

The GA found the best solution overall and had a better average solution, but the ACO was more consistent, with less variation between solutions and a better “worst” solution than the GA. For problems where the algorithm cannot run several times before outputting a solution, the ACO may be the better choice. The ACO is also a brand-new algorithm for me, so there is plenty of room for growth in its implementation. The GA used in this comparison includes a new operator I wrote specifically for TSP problems, which significantly improved its performance over a standard Random Keys GA; similar improvements can be made to my basic ACO for solving TSPs as well as other types of problems.

A quick note about parameters: GAs are very sensitive to the values chosen for their parameters, such as the percentage of the population generated through reproduction vs. crossover vs. immigration/mutation. During initial testing I found that ACOs can also be very sensitive to the values chosen for their constants. While tweaking just a single constant (the one that controls how much the distance between cities contributes to choosing the next city), I noticed that the best solution found depended heavily on its value. For the TSPLIB problem pcb442.tsp (optimal value = 50,778), setting the constant to 2 resulted in a best solution with an objective of 206,887, while setting it to 4 resulted in a best solution with an objective of 79,312! That’s a big difference for changing just a single constant. As with other metaheuristics, GAs included, performance tuning is required when tackling a new problem with an ACO.
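To make that sensitivity concrete, here is a small illustration of how the distance constant (usually called beta in the ACO literature) skews the next-city selection probabilities. The distances and pheromone levels below are hypothetical, not taken from the pcb442.tsp runs:

```python
def selection_probs(distances, pheromones, alpha=1.0, beta=2.0):
    """P(city j) proportional to pheromone_j^alpha * (1/distance_j)^beta."""
    weights = [(p ** alpha) * ((1.0 / d) ** beta)
               for p, d in zip(pheromones, distances)]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate cities at distances 10, 20, and 40, equal pheromone on each edge.
dists, pher = [10.0, 20.0, 40.0], [1.0, 1.0, 1.0]
for beta in (2, 4):
    probs = selection_probs(dists, pher, beta=beta)
    print(beta, [round(p, 3) for p in probs])
    # beta=2 -> roughly [0.762, 0.190, 0.048]
    # beta=4 -> roughly [0.938, 0.059, 0.004]
```

Raising beta from 2 to 4 pushes the ant much harder toward the nearest city, which makes construction greedier; whether that helps or hurts depends on the instance, which is consistent with the swing I saw on pcb442.tsp.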

Just tried my vanilla Tabu Search TSP configuration for 45 seconds on st70.tsp and pcb442.tsp, to compare.

For st70.tsp, I get 691 (better than your average 723, but worse than your best 678).

For pcb442.tsp, I get 58,790 (better than your best 79,312).

Of course, this isn’t a fair comparison, as our hardware differs.

My config is very vanilla; I haven’t tried using smarter moves (such as subChainChange or Swap with or without reversing), which should give better results.