Two approaches to auto-tuning (AT) are presented in this talk. First, we present a traditional approach that applies loop transformations to numerical computation kernels from scientific applications, such as seismic wave analysis and plasma simulation. By applying loop transformations, such as loop collapse and loop split, we can obtain substantial speedups on current many-core architectures, such as the Intel Xeon Phi (KNL). Preliminary results on the Oakforest-PACS, obtained with an AT language named ppOpen-AT, will be shown. Second, we present a novel approach that performs parameter optimization with deep learning (DL). We focus on AT of parameter tuning for numerical libraries, in particular preconditioner selection for sparse iterative solvers. We propose an AT method that applies DL to a feature image of the input sparse matrix to predict the best computation method for sparse matrix-vector multiplication and the best preconditioner algorithm. Preliminary evaluation results for the proposed AT method will be shown for more than 1000 matrices from the SuiteSparse Matrix Collection.
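As a minimal illustration of the loop-collapse transformation mentioned above (a sketch, not code from the talk; the kernel and array shapes are hypothetical), collapsing flattens a nested loop into a single loop so that a compiler or runtime sees one larger iteration space to distribute across many-core threads:

```python
# Hypothetical element-wise kernel over an ni x nj grid,
# stored in flat (row-major) arrays.

def kernel_nested(a, b, c, ni, nj):
    # Original form: two nested loops.
    for i in range(ni):
        for j in range(nj):
            c[i * nj + j] = a[i * nj + j] + b[i * nj + j]

def kernel_collapsed(a, b, c, ni, nj):
    # After loop collapse: one flat loop over ni*nj iterations,
    # exposing a single larger iteration space for parallelization.
    for ij in range(ni * nj):
        c[ij] = a[ij] + b[ij]
```

Both variants compute the same result; the collapsed form is what an AT framework such as ppOpen-AT can generate and compare against the original on a given architecture.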
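To make the DL-based approach concrete, one plausible way to build a "feature image" from a sparse matrix is a coarse density map of its nonzero pattern; the sketch below is an assumption for illustration, not the talk's actual method, and the function name and parameters are hypothetical:

```python
# Hypothetical sketch: reduce an n x n sparse matrix's nonzero pattern
# to a fixed-size density image usable as input to a DL classifier
# that predicts an SpMV method or a preconditioner.

def feature_image(rows, cols, n, size=8):
    """rows, cols: coordinates of the nonzeros of an n x n matrix."""
    img = [[0.0] * size for _ in range(size)]
    # Bin each nonzero into a size x size grid cell.
    for r, c in zip(rows, cols):
        img[r * size // n][c * size // n] += 1.0
    # Normalize by the maximum cell count (avoid division by zero).
    m = max(max(row) for row in img) or 1.0
    return [[v / m for v in row] for row in img]
```

Because the output has a fixed shape regardless of the matrix dimension, matrices of very different sizes (e.g. the 1000+ SuiteSparse matrices) can all be fed to the same network.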