QR factorization is one of the most important computations in numerical linear algebra and is used in a variety of numerical simulations. Many large-scale simulations operate on huge matrices and therefore require both large memory and long computation time. Low-rank approximation methods are expected to reduce both. The block low-rank (BLR) format is one such low-rank approximation method, and QR factorization of BLR matrices has already been implemented. We are now working on a GPU implementation. In this talk, we report the current status of this work.
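As background, the core idea of the BLR format is to replace each (numerically low-rank) block of a dense matrix with a pair of thin factors. The sketch below, in Python with NumPy, illustrates this compression for a single block using a truncated SVD; the function name `compress_block` and the tolerance parameter are ours for illustration, not part of the implementation described in the talk.

```python
import numpy as np

def compress_block(block, tol=1e-8):
    """Illustrative BLR-style compression of one block.

    Returns thin factors (U, V) such that block ~= U @ V, dropping
    singular values below tol relative to the largest one.
    """
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))  # numerical rank at tolerance tol
    return U[:, :k] * s[:k], Vt[:k, :]

rng = np.random.default_rng(0)
# A 64x64 block of exact rank 8, mimicking a numerically low-rank
# off-diagonal block as it arises in BLR matrices.
A = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))

U, V = compress_block(A)
err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
# Storage drops from 64*64 entries to 2*64*k entries for rank k.
print(U.shape, V.shape, err)
```

With a rank-k block, the dense storage of m*n entries shrinks to (m+n)*k, which is the source of both the memory and runtime savings that motivate BLR factorizations.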