Opt-AiNet

An Opt-AiNet is an optimization-oriented variant of the AiNet System that implements an Opt-AiNet Algorithm to solve an Opt-AiNet Task.



References

2002

1. Randomly initialize a population of cells (the initial number of cells is not relevant)
2. While stopping criterion is not met do
2.1 Determine the fitness of each network cell and normalize the vector of fitnesses.
2.2 Generate a number [math]\displaystyle{ N_c }[/math] of clones for each network cell.
2.3 Mutate each clone proportionally to the fitness of its parent cell, but keep the parent cell. The mutation follows Eq. (1).
2.4 Determine the fitness of all individuals of the population.
2.5 For each parent cell and its clones, select the cell with the highest fitness and calculate the average fitness of the selected population.
2.6 If the average error of the population is not significantly different from the previous iteration, then continue. Else, return to step 2.1
2.7 Determine the affinity of all cells in the network. Suppress all but the highest fitness of those cells whose affinities are less than the suppression threshold [math]\displaystyle{ \sigma_s }[/math] and determine the number of network cells, named memory cells, after suppression.
2.8 Introduce a percentage [math]\displaystyle{ d\% }[/math] of randomly generated cells and return to step 2.
3. EndWhile
The behavior of the new algorithm can be explained in a simple form. Steps 2.1 to 2.5: at each iteration, a population of cells is optimized locally through affinity proportional mutation (exploitation of the fitness landscape). The fact that no parent cell has a selective advantage over the others contributes to the multimodal search of the algorithm.

Steps 2.6 to 2.8: when this population reaches a stable state (measured via the stabilization of its average fitness), the cells interact with each other in a network form, and some of the similar cells are eliminated to avoid redundancy. Also, a number of randomly generated cells is added to the current population (exploration of the fitness landscape) and the process of local optimization re-starts.
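For concreteness, the loop above can be sketched in Python/NumPy roughly as follows. This is a minimal sketch under several assumptions not fixed by the quoted description: the task is maximization over a box-constrained real domain, affinity in step 2.7 is taken as Euclidean distance between cells, the stopping criterion in step 2 is a fixed iteration budget, and all names and default values (`opt_ainet`, `n_clones`, `beta`, `sigma_s`, `d_frac`, `tol`) are illustrative choices, not part of the original specification.

```python
import numpy as np

def opt_ainet(fitness, bounds, n_cells=20, n_clones=10, beta=100.0,
              sigma_s=0.2, d_frac=0.4, max_iter=200, tol=1e-4, seed=0):
    """Maximize `fitness` over the box `bounds = (lower, upper)` (two 1-D arrays)."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    dim = lo.size

    def random_cells(n):                                # uniform cells inside the domain
        return rng.uniform(lo, hi, size=(n, dim))

    def evaluate(pop):
        return np.array([fitness(c) for c in pop])

    cells = random_cells(n_cells)                       # step 1: random initial population
    prev_avg = None
    for _ in range(max_iter):                           # step 2: fixed iteration budget as stopping criterion
        fit = evaluate(cells)                           # 2.1: fitness, normalized to [0, 1]
        span = fit.max() - fit.min()
        f_norm = (fit - fit.min()) / span if span > 0 else np.ones_like(fit)

        survivors = []
        for c, fn in zip(cells, f_norm):                # 2.2-2.5: clone, mutate, select per parent
            clones = np.tile(c, (n_clones, 1))
            alpha = (1.0 / beta) * np.exp(-fn)          # Eq. (1): step size shrinks with fitness
            trial = clones + alpha * rng.standard_normal(clones.shape)
            inside = np.all((trial >= lo) & (trial <= hi), axis=1)
            clones = np.where(inside[:, None], trial, clones)   # reject out-of-domain mutations
            cand = np.vstack([clones, c[None, :]])      # the parent cell itself is kept
            survivors.append(cand[np.argmax(evaluate(cand))])
        cells = np.array(survivors)

        avg = evaluate(cells).mean()
        if prev_avg is not None and abs(avg - prev_avg) < tol:   # 2.6: average fitness has stabilized
            fit = evaluate(cells)                       # 2.7: network suppression
            keep = []
            for i in np.argsort(-fit):                  # visit cells from fittest to least fit
                if all(np.linalg.norm(cells[i] - cells[j]) >= sigma_s for j in keep):
                    keep.append(i)
            cells = cells[keep]                         # surviving "memory" cells
            n_new = max(1, int(round(d_frac * len(cells))))
            cells = np.vstack([cells, random_cells(n_new)])      # 2.8: d% random newcomers
        prev_avg = avg
    return cells
```

With these assumed settings, the returned array holds the surviving memory cells (ideally one per located optimum) plus any random newcomers introduced in the final iteration; for example, calling `opt_ainet(lambda x: float(np.sum(np.sin(5 * x))), (np.full(2, -2.0), np.full(2, 2.0)))` exercises the sketch on a simple multimodal surface.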

(...) The affinity proportional mutation of Step 2.3 is performed according to the following expression:

[math]\displaystyle{ c' = c + \alpha N(0,1)\quad }[/math] (1)

[math]\displaystyle{ \alpha = (1/\beta) \exp(-f^*) }[/math],

where [math]\displaystyle{ c' }[/math] is the mutated version of cell [math]\displaystyle{ c }[/math], [math]\displaystyle{ N(0,1) }[/math] is a Gaussian random variable with zero mean and standard deviation [math]\displaystyle{ \sigma = 1 }[/math], [math]\displaystyle{ \beta }[/math] is a parameter that controls the decay of the inverse exponential function, and [math]\displaystyle{ f^* }[/math] is the fitness of an individual normalized to the interval [0,1]. A mutation is only accepted if the mutated cell [math]\displaystyle{ c' }[/math] remains within the domain.
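As a small complement, the snippet below isolates Eq. (1) in the same assumed Python/NumPy setting; the helper name `mutate_cell` and the value [math]\displaystyle{ \beta = 100 }[/math] are hypothetical choices, used only to show how the step size [math]\displaystyle{ \alpha }[/math] shrinks as the normalized fitness [math]\displaystyle{ f^* }[/math] approaches 1.

```python
import numpy as np

def mutate_cell(c, f_star, beta, lo, hi, rng):
    """Apply Eq. (1) to one cell; reject the move if it leaves the domain [lo, hi]."""
    alpha = (1.0 / beta) * np.exp(-f_star)              # step size decays with normalized fitness
    c_prime = c + alpha * rng.standard_normal(c.shape)  # c' = c + alpha * N(0, 1)
    return c_prime if np.all((c_prime >= lo) & (c_prime <= hi)) else c

for f_star in (0.0, 0.5, 1.0):                          # with beta = 100: alpha = 0.01, ~0.0061, ~0.0037
    print(f_star, (1.0 / 100.0) * np.exp(-f_star))
```

The inverse exponential means that low-fitness cells take comparatively large exploratory steps while near-optimal cells are only fine-tuned, which is what drives the local exploitation described for steps 2.1 to 2.5.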