General matrix-matrix multiplications (GEMM) in vendor-supplied BLAS libraries are best optimized for square matrices but often show poor performance for tall & skinny matrices, i.e., matrices that are much taller than they are wide. Nvidia's current CUBLAS implementation delivers only a fraction of the potential performance (as given by the roofline model) in this case. We describe the challenges and key properties of an implementation that can achieve perfect performance. We further evaluate different approaches to parallelization and thread distribution, and devise a flexible, configurable mapping scheme. A code generation approach, combined with autotuning, enables an implementation that is both flexible and specialized. On an Nvidia Volta GPGPU, this results in perfect performance for a large range of matrix sizes in the domain of interest, and in at least 2/3 of the maximum performance for the rest.


    Title:

    Performance Engineering for a Tall & Skinny Matrix Multiplication Kernel on GPUs


    Contributors:

    Ernst, Dominik / Hager, Georg / Thies, Jonas et al.

    Conference:

    2019; Bialystok, Poland



    Publication date:

    2020


    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English




    Similar titles:

    Performance engineering for real and complex tall & skinny matrix multiplication kernels on GPUs

    Ernst, Dominik / Hager, Georg / Thies, Jonas et al. | German Aerospace Center (DLR) | 2020


    Skinny nozzle beats heat rap

    Automotive engineering | 1985


    FIBRE IBCs - Exim's skinny bag

    Online Contents | 2001