Simulated annealing is one of the most elegant algorithms in the family of metaheuristics. It is easy to understand and simple to implement and run. It also has a very small memory footprint: at any moment it only needs to store the current solution and one candidate, rather than a whole population of solutions. This is especially notable when we compare it to population-based methods such as genetic algorithms and genetic programming.
Like many metaheuristic algorithms, simulated annealing draws its analogy from materials science. The word annealing means to soften something. In materials science, and especially in metallurgical engineering, annealing refers to a heat treatment in which a metal or an alloy, possibly in a molten state, is cooled from a very high temperature at a very slow rate. Slow cooling allows the metal to form coarse grains and to settle into states of lower energy. Although such grains can be observed only under a microscope, the net effect is that the metal becomes soft; hence the term annealing. The opposite of annealing is quenching, in which the metal is cooled very rapidly, typically by immersing it in water; hence the name quenching. Such rapid cooling does not give the grains time to grow and to reach lower energy levels, so the metal becomes hard and brittle.
The simulated annealing algorithm tries to find a globally optimal solution to a problem, just like other metaheuristic algorithms. In the context of numerical optimization, the goal is usually to find a global minimum in a search space of candidate solutions. To achieve this, the algorithm starts by picking an initial solution at random. It then iteratively moves to better solutions in order to descend through the search space. The algorithm also allows itself to occasionally accept worse solutions, in the hope of finding a trajectory through the search space that leads to a globally optimal solution. How readily it accepts worse solutions is governed by a parameter called the annealing temperature.
The annealing temperature is initially set to a high value. As the iterations proceed, the temperature is gradually decreased, which restricts the acceptance of worse solutions. The overall working of the algorithm can be summarized in the following pseudocode.
1. Choose a solution S for the problem at random.
2. Set the annealing temperature T to some sensible initial value.
3. Pick another candidate solution S'.
4. If S' is better than S, set S = S'; that is, accept S' as the new current solution.
5. If S' is not better than S, still accept it as the new solution with a certain probability that depends on the annealing temperature T (and typically on how much worse S' is).
6. Lower the temperature T according to a cooling schedule.
7. Repeat from step 3 until a solution of the desired quality is found, a certain number of iterations has elapsed, or some other user-defined stopping criterion is met.
8. Terminate the algorithm and return S as the final solution.
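The steps above can be sketched in Python as follows. This is a minimal illustration, not a definitive implementation: the geometric cooling schedule (multiplying T by a constant factor), the specific parameter values, and the Metropolis-style acceptance probability exp(-delta/T) are common conventional choices assumed here, since the pseudocode does not prescribe them.

```python
import math
import random

def simulated_annealing(cost, neighbor, initial,
                        t_start=10.0, t_end=1e-3, alpha=0.95,
                        iters_per_temp=100):
    """Minimize `cost` starting from `initial`.

    `neighbor` produces a random candidate solution near a given one.
    The geometric cooling schedule (T <- alpha * T) is one common
    choice among many.
    """
    s = initial            # step 1: random initial solution (caller supplies it)
    best = s
    t = t_start            # step 2: initial annealing temperature
    while t > t_end:
        for _ in range(iters_per_temp):
            s_new = neighbor(s)                 # step 3: pick a candidate
            delta = cost(s_new) - cost(s)
            # Steps 4-5: accept better solutions outright; accept worse
            # ones with probability exp(-delta / T), which shrinks as
            # the temperature falls.
            if delta < 0 or random.random() < math.exp(-delta / t):
                s = s_new
                if cost(s) < cost(best):
                    best = s
        t *= alpha         # step 6: cool down
    return best            # step 8: final solution

# Usage: minimize f(x) = (x - 3)^2, whose global minimum is at x = 3.
result = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    initial=random.uniform(-10, 10),
)
```

Note that the sketch keeps track of the best solution seen so far in addition to the current one; since the algorithm deliberately accepts worse solutions at times, the current solution at termination is not guaranteed to be the best one visited.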