Sensitivity Analysis of Basins of Attraction for Nelder-Mead

Date: 2022-01-01

Creator: Sonia K. Shah

Access: Open access

The Nelder-Mead optimization method is a numerical method used to find the minimum of an objective function in a multidimensional space. In this paper, we use this method to study functions, specifically functions with three-dimensional graphs, and create images of the basin of attraction of each function. Three different methods are used to create these images: the systematic point method, the randomized centroid method, and the systemized centroid method. This paper applies these methods to different functions. The first function has two minima with equal function values. The second function has one global minimum and one local minimum. The last function studied has several minima with different function values. The systematic point method is reliable in particular scenarios but is extremely sensitive to changes in the initial simplex. The randomized centroid method was not found to be useful, because the basin of attraction images it produces are difficult to interpret; this makes it hard to know when the method is working effectively and when it is not. The systemized centroid method appears to be the most precise and effective method for creating basin of attraction images in most cases. It rarely fails to find a minimum, and it finds global minima more reliably than local minima. It is important to remember that these conclusions are based only on the results of the methods and functions studied, and that more effective methods may exist.
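
As a rough illustration of how such a basin image can be generated, the sketch below starts Nelder-Mead from each point of a grid and colors the point by the minimum it converges to. This is only loosely in the spirit of the systematic point method described above: the test function, grid, tolerance, and labeling rule are assumptions, and SciPy's Nelder-Mead implementation stands in for the author's.

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    x, y = p
    # Example objective (an assumption): two minima with equal function value,
    # at (-1, 0) and (1, 0), echoing the paper's first test case.
    return (x**2 - 1)**2 + y**2

minima = np.array([[-1.0, 0.0], [1.0, 0.0]])

def basin_label(x0):
    """Run Nelder-Mead from x0 and report which known minimum it reached."""
    res = minimize(f, x0, method="Nelder-Mead")
    d = np.linalg.norm(minima - res.x, axis=1)
    return int(np.argmin(d)) if res.success and d.min() < 1e-3 else -1

# Systematically sweep a grid of starting points; each entry of `image`
# is the index of the basin that the corresponding start converges to.
xs = np.linspace(-2.0, 2.0, 100)
ys = np.linspace(-2.0, 2.0, 100)
image = np.array([[basin_label((x, y)) for x in xs] for y in ys])
```

Plotting `image` (for example with matplotlib's imshow, one color per label) yields the kind of basin of attraction picture the abstract describes.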


Sensitivity Analysis of Basins of Attraction for Gradient-Based Optimization Methods

Date: 2022-01-01

Creator: Gillian King

Access: Open access

This project is an analysis of the effectiveness of five distinct optimization methods at producing clear images of basins of attraction, the sets of initial points that approach the same minimum of a given function. Basin images are similar to contour plots, except that they depict, in unique colors, the distinct regions of points that approach the same minimum. Though distinct in goal, contour plots are useful to basin research in that idealized basin images can be inferred from the steepness levels and locations of extrema they depict. The effectiveness of a method varies slightly with the function, but is generally defined by how closely the basin image matches contour information about where the true minima are located, and by the clarity of the resulting image in depicting well-defined regions. The methods are tested on four distinct functions chosen to assess how each method performs in the presence of various challenges. This project ranks the five methods for their overall effectiveness and consistency across the four functions, and also analyzes the sensitivity of the methods when small changes are made to the function. In general, less sensitive and consistently effective methods are more applicable and reliable in applied optimization research.
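
The five gradient-based methods themselves are not named in the abstract, so the sketch below uses plain fixed-step gradient descent as a stand-in to show how such a basin image is assembled; the test function, learning rate, and stopping rule are all assumptions.

```python
import numpy as np

def grad_f(p):
    """Gradient of f(x, y) = (x^2 - 1)^2 + y^2, which has minima at (+/-1, 0)."""
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def descend(x0, lr=0.01, tol=1e-8, max_iter=5000):
    p = np.array(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(p)
        if np.linalg.norm(g) < tol:
            break  # (near-)stationary point reached
        p -= lr * g
    return p

# Label each starting point by the sign of the x-coordinate it descends to,
# giving a two-color basin image for the two minima.
xs = np.linspace(-2.0, 2.0, 100)
image = [[0 if descend((x, y))[0] < 0.0 else 1 for x in xs]
         for y in np.linspace(-2.0, 2.0, 100)]
```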


Basins of Attraction and Metaoptimization for Particle Swarm Optimization Methods

Date: 2024-01-01

Creator: David Ma

Access: Open access

Particle swarm optimization (PSO) is a metaheuristic optimization method that finds near-optima by spawning particles which explore a given search space while exploiting the best candidate solutions of the swarm. PSO algorithms emulate the behavior of, for example, a flock of birds or a school of fish, and encapsulate the randomness present in natural processes. In this paper, we discuss different initialization schemes and meta-optimizations for PSO, its performance on various multi-minima functions, and the unique intricacies and obstacles the method faces when attempting to produce images of basins of attraction, the sets of initial points that the method maps to the same minimum. This project compares the relative strengths and weaknesses of particle swarm optimization with other optimization methods, namely gradient descent, in the context of basin mapping and other metrics. It was found that, with proper parameterization, PSO can amply explore the search space regardless of initialization. For all functions, the swarm was capable of finding the global minimum or minima, within some tolerance, in fewer than 60 iterations, given sufficiently well-chosen parameters and parameterization schemes. The shortcomings of the particle swarm method, however, are that its parameters often require fine-tuning to optimize most efficiently in different search spaces, and that the swarm cannot produce the analytical minimum. Overall, PSO is a highly adaptive and computationally efficient method with few initial constraints that can readily be used as the first step of any optimization task.
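
For concreteness, here is a bare-bones global-best PSO in the standard textbook formulation. The inertia weight, acceleration coefficients, bounds, and uniform random initialization below are assumptions for illustration, not the parameterization schemes studied in the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=60, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()    # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull toward pbest + social pull toward gbest.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: minimize the 2-D sphere function, whose global minimum is at the origin.
best_x, best_val = pso(lambda p: np.sum(p**2), dim=2)
```

Note how the stochastic factors r1 and r2 supply the natural-process randomness the abstract mentions, while gbest provides the swarm's shared exploitation signal.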


GEM-PSO: Particle Swarm Optimization Guided by Enhanced Memory

Date: 2019-05-01

Creator: Kevin Fakai Chen

Access: Open access

Particle Swarm Optimization (PSO) is a widely used nature-inspired optimization technique in which a swarm of virtual particles works together, with limited communication, to find a global minimum or optimum. PSO has been successfully applied to a wide variety of practical problems, such as optimization in engineering fields, hybridization with other nature-inspired algorithms, and general optimization problems. However, PSO suffers from a phenomenon known as premature convergence, in which the algorithm's particles all converge on a local optimum instead of the global optimum and cannot improve their solution any further. We seek to improve upon the standard PSO algorithm by addressing this premature convergence behavior. We do so by storing and exploiting increased information in the form of past bests, which we term enhanced memory. We introduce three types of modifications to each new algorithm (which we call a GEM-PSO: Particle Swarm Optimization Guided by Enhanced Memory, because our modifications all deal with enhancing the memory of each particle): procedures for saving a found best, for removing a best from memory when a new one is to be added, and for selecting one or more bests to use from those saved in memory. By using different combinations of these modifications, we can create many variants of GEM-PSO with a wide variety of behaviors and qualities. We analyze the performance of GEM-PSO, discuss the impact of PSO's parameters on the algorithms' performance, isolate individual modifications to closely study their impact on the performance of a given GEM-PSO variant, and examine how combinations of modifications perform. Finally, we draw conclusions about the efficacy and potential of GEM-PSO variants and provide ideas for further exploration in this area. Many GEM-PSO variants consistently outperform standard PSO on specific functions, and GEM-PSO variants show promise for both general and specific use cases.
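
The thesis defines its own save, removal, and selection procedures; as a purely illustrative sketch of the enhanced-memory idea, the hypothetical particle below keeps several past bests in a bounded deque (FIFO removal) and selects uniformly at random among them. None of these specific choices is taken from the paper.

```python
import random
from collections import deque

class MemoryParticle:
    """Hypothetical particle that remembers several past bests, not just one."""

    def __init__(self, position, value, capacity=5):
        # Removal procedure (assumed): a bounded deque evicts the oldest
        # stored best whenever a new one is added to a full memory.
        self.bests = deque([(value, tuple(position))], maxlen=capacity)

    def save_best(self, position, value):
        # Save procedure (assumed): record every new personal best found.
        if value < min(v for v, _ in self.bests):
            self.bests.append((value, tuple(position)))

    def select_best(self):
        # Selection procedure (assumed): choose uniformly among stored bests,
        # which preserves diversity and may help resist premature convergence.
        return random.choice(list(self.bests))[1]
```

In a full GEM-PSO-style algorithm, select_best() would replace the single stored personal best in the velocity update, so different save/remove/select combinations yield the different variants the abstract describes.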