Springer papers

Authors

Marjan Gusev, Sasko Ristov and Goran Velkoski

Abstract

Multiplication of huge matrices generates more cache misses than multiplication of smaller matrices. 2D block decomposition into blocks that fit in the L1 CPU cache reduces cache misses, since the operations access only data stored in the L1 cache. However, it also requires additional reads, writes, and operations compared to 1D partitioning, since the blocks are read multiple times. In this paper we propose a new hybrid 2D/1D partitioning to exploit the advantages of both approaches. The idea is first to partition the matrices into 2D blocks and then to multiply each block using 1D partitioning, in order to achieve minimum cache misses. As in 2D block decomposition, we select a block size that fits in the L1 cache, but we use rectangular instead of square blocks in order to reduce both the number of operations and the conflicts caused by cache associativity. The experiments show that our proposed algorithm outperforms the 2D blocking algorithm for huge matrices on an AMD Phenom CPU.
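
To make the blocking idea concrete, the following is a minimal C sketch of a cache-blocked matrix multiplication in the spirit described above: the outer loops traverse rectangular 2D blocks, while the innermost loops sweep each block row by row. The block sizes BI, BJ, BK, the loop order, and the matrix size are illustrative assumptions only, not the exact algorithm or parameters evaluated in the paper.

```c
/* Illustrative sketch of cache-blocked matrix multiplication.
 * BI/BJ/BK are hypothetical tuning parameters, assumed to be chosen
 * so that the working set of one block multiply fits in the L1 cache. */
#include <stdio.h>
#include <stdlib.h>

#define N  1024   /* matrix dimension (square matrices for simplicity) */
#define BI 32     /* block height (rectangular, not square, blocks)    */
#define BJ 64     /* block width                                       */
#define BK 64     /* block depth                                       */

/* C += A * B, all N x N, row-major. The outer three loops walk 2D
 * blocks; the inner loops process each block row by row (the "1D" part). */
static void matmul_blocked(const double *A, const double *B, double *C)
{
    for (int ii = 0; ii < N; ii += BI)
        for (int kk = 0; kk < N; kk += BK)
            for (int jj = 0; jj < N; jj += BJ)
                for (int i = ii; i < ii + BI && i < N; i++)
                    for (int k = kk; k < kk + BK && k < N; k++) {
                        double a = A[i * N + k];
                        for (int j = jj; j < jj + BJ && j < N; j++)
                            C[i * N + j] += a * B[k * N + j];
                    }
}

int main(void)
{
    double *A = calloc((size_t)N * N, sizeof *A);
    double *B = calloc((size_t)N * N, sizeof *B);
    double *C = calloc((size_t)N * N, sizeof *C);
    if (!A || !B || !C) return 1;

    for (size_t i = 0; i < (size_t)N * N; i++) { A[i] = 1.0; B[i] = 2.0; }

    matmul_blocked(A, B, C);
    printf("C[0][0] = %f\n", C[0]);   /* expect N * 1.0 * 2.0 = 2048 */

    free(A); free(B); free(C);
    return 0;
}
```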

Keywords

CPU Cache · Multiprocessor · Matrix Partitioning