For high-performance computing on distributed-memory architectures, MPI is the de facto standard. To achieve high system performance, the MPI communication routines have to be optimized, which can be done by tuning the runtime parameters. However, finding optimal values for the important runtime parameters is a challenging task: several hundred runs are required, and the parameter values found are specific to a particular input. In this study, standard benchmarks are used to overcome this problem, so that the optimal parameter values found can be reused for other, similar applications. Two heuristic algorithms, Genetic Algorithm and Simulated Annealing, are used to find optimal MPI runtime parameter values; this approach is shown to significantly reduce the time and effort required to predict the parameters. A comparison is made between the two algorithms, and among variations of the Genetic Algorithm, based on the performance gain obtained with the optimal runtime parameter values relative to the default MPI parameter values.
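To illustrate the kind of search the abstract describes, the sketch below applies simulated annealing to a small, hypothetical MPI runtime-parameter space. The parameter names, their ranges, and the synthetic cost function are all assumptions standing in for real benchmark runs; they are not taken from the paper or from any specific MPI implementation.

```python
import math
import random

# Hypothetical MPI runtime parameter space (names and ranges are
# illustrative only, not from the paper or a specific MPI library).
PARAM_SPACE = {
    "eager_limit": [1024, 4096, 16384, 65536],
    "buffer_size": [8192, 32768, 131072],
    "num_channels": [1, 2, 4, 8],
}

def benchmark_runtime(params):
    """Stand-in for one benchmark run with the given parameters.

    A real tuner would launch an MPI benchmark and measure wall-clock
    time; here a synthetic cost function keeps the sketch runnable."""
    return (abs(params["eager_limit"] - 16384) / 16384
            + abs(params["buffer_size"] - 32768) / 32768
            + abs(params["num_channels"] - 4) / 4)

def neighbor(params, rng):
    """Perturb one randomly chosen parameter to another allowed value."""
    new = dict(params)
    key = rng.choice(list(PARAM_SPACE))
    new[key] = rng.choice(PARAM_SPACE[key])
    return new

def simulated_annealing(rng, steps=500, t0=1.0, cooling=0.99):
    current = {k: rng.choice(v) for k, v in PARAM_SPACE.items()}
    cur_cost = benchmark_runtime(current)
    best, best_cost = current, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        cost = benchmark_runtime(cand)
        # Always accept improvements; accept worse candidates with a
        # probability that shrinks as the temperature cools.
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            current, cur_cost = cand, cost
        if cur_cost < best_cost:
            best, best_cost = current, cur_cost
        t *= cooling
    return best, best_cost

rng = random.Random(42)
best, cost = simulated_annealing(rng)
print(best, cost)
```

The acceptance rule is the key design choice: early on, high temperature lets the search escape locally good but globally poor settings, while later the search converges greedily. A genetic-algorithm variant would instead maintain a population of parameter vectors and recombine them, but the evaluation step, running a benchmark per candidate, is the same and dominates tuning cost in practice.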
T. Satish Kumar, S. Sakthivel and M. Manjunatha Swamy. Optimizing MPI Communication Using Heuristic Algorithms.
DOI: https://doi.org/10.36478/ajit.2014.700.706
URL: https://www.makhillpublications.co/view-article/1682-3915/ajit.2014.700.706