The geophysical inversion of gravity anomalies is investigated using the application GMI, based on the CLEAR algorithm and run on parallel systems. Parallelization is implemented with both OpenMP and MPI. The time-domain scalability of the iterative inversion process is analyzed by comparing previously reported OpenMP results with recent MPI test data. For small models, the runtime did not improve as the number of processing cores increased. The growth of user runtime with model size was faster for MPI than for OpenMP, so for big models OpenMP offers the better runtime. Walltime scalability on multi-user systems did not improve with additional processing cores, as a result of time sharing. The results confirm that the runtime scales as O(N^8) with the linear size N of 3D models, while the impact of increasing the number of cores remains disputable when walltime is considered. The walltime upper limit for modest-resolution 3D models with 41*41*21 nodes was 10^5 seconds, suggesting the need for MPI on multi-cluster systems and for GPUs to reach better resolution. The results were obtained in the framework of the FP7 infrastructure project HP-SEE.
Keywords: high performance computing, gravity inversion, OpenMP, MPI
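To make the reported O(N^8) scaling concrete (the doubling argument below is illustrative arithmetic, not a measurement from the paper): doubling the linear resolution multiplies the runtime by

t(2N) / t(N) = (2N)^8 / N^8 = 2^8 = 256,

so a model already at the 10^5 s walltime limit would require roughly 2.56*10^7 s (about 300 days) at twice the resolution on the same resources. This is the quantitative case behind the call for multi-cluster MPI and GPU acceleration.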
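The abstract does not specify GMI's internals, so the following is a minimal sketch, in C, of the hybrid MPI+OpenMP pattern being compared: observation stations block-distributed across MPI ranks, the per-station sum over model cells split among OpenMP threads, and an MPI_Allreduce assembling the full forward response needed by the next inversion iteration. The point-mass kernel, grid geometry, and all variable names are assumptions for illustration, not the actual GMI/CLEAR implementation.

```c
/* Hedged sketch of one forward-modeling step inside an iterative
 * gravity inversion, hybrid MPI (stations) + OpenMP (cells).
 * Kernel, grid, and names are illustrative assumptions. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define G 6.674e-11 /* gravitational constant, m^3 kg^-1 s^-2 */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 41;              /* linear model size, as in the abstract */
    const int n_cell = N * N * 21; /* 41*41*21 nodes */
    const int n_sta  = N * N;      /* one station per surface node (assumed) */
    const double h = 100.0;        /* assumed cell size, m */

    /* Toy model: uniform density contrast (placeholder data). */
    double *rho = malloc(n_cell * sizeof(double));
    for (int j = 0; j < n_cell; ++j) rho[j] = 100.0; /* kg/m^3 */

    double *gz = calloc(n_sta, sizeof(double)); /* vertical anomaly */

    /* Block-distribute stations across MPI ranks. */
    int lo = rank * n_sta / size, hi = (rank + 1) * n_sta / size;

    /* Each rank computes its stations; OpenMP threads split the loop. */
    #pragma omp parallel for schedule(static)
    for (int i = lo; i < hi; ++i) {
        double sx = (i % N) * h, sy = (i / N) * h; /* station at z = 0 */
        double sum = 0.0;
        for (int j = 0; j < n_cell; ++j) {  /* point-mass kernel (assumed) */
            double cx = (j % N) * h;
            double cy = ((j / N) % N) * h;
            double cz = (j / (N * N)) * h + h / 2; /* cell-center depth */
            double dx = cx - sx, dy = cy - sy;
            double r2 = dx * dx + dy * dy + cz * cz;
            sum += rho[j] * cz / (r2 * sqrt(r2));
        }
        gz[i] = G * h * h * h * sum; /* fold in cell volume, SI units */
    }

    /* Non-owned entries are zero on every rank, so a sum reassembles
     * the full anomaly vector for the next inversion iteration. */
    MPI_Allreduce(MPI_IN_PLACE, gz, n_sta, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("gz[0] = %.6e m/s^2\n", gz[0]);

    free(rho);
    free(gz);
    MPI_Finalize();
    return 0;
}
```

A hybrid build would compile with something like `mpicc -fopenmp sketch.c -lm` and launch with `mpirun`; the inner double loop is the O(stations * cells) cost that every inversion iteration repeats, which is where the abstract's runtime growth with model size originates.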