Challenges for Iterative Methods on Extreme Scale Very Sparse Matrices

Exascale machines are now available, based on several different arithmetics (from 64-bit down to 16- and 32-bit arithmetic, including mixed-precision versions and some that are no longer IEEE compliant) and on different architectures. Brain-scale applications, from machine learning and AI for example, manipulate huge graphs or meshes that lead to very sparse, nonsymmetric linear algebra problems. Recent applications generate data that allow a few compressed rows or columns of extreme-scale, very sparse matrices to be distributed across supercomputer nodes, but that do not allow a dense vector of the same order to be stored on each node, as is usually assumed when computing distributed matrix-vector products (even using MapReduce). Moreover, some new processors have networks on chip interconnecting subsets of cores that no longer share memory, combined with distributed vector facilities. In this talk, after a short description of these recent evolutions and their important impact on our results, in particular on parallel and distributed iterative methods, I present results obtained on Fugaku, still the #1 supercomputer on the HPCG list, for two important iterative methods: PageRank and the Conjugate Gradient method. Both are based on sequences of sparse matrix products optimized for very large and very irregular sparse matrices, with respect to the sparsity and size of the matrices on the one hand, and to the number of processes and nodes on the other. I conclude by proposing some research perspectives and potential collaborations.
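To make the central kernel concrete, below is a minimal, single-node sketch of the PageRank power iteration built on a compressed-sparse-row (CSR) matrix-vector product, the kind of sparse product sequence the talk refers to. The data layout, damping factor, and tolerance used here are illustrative assumptions only; this is not the optimized distributed implementation run on Fugaku.

# Minimal sketch: PageRank power iteration over a CSR sparse matrix.
# Illustrative only; parameters and layout are assumptions, not the
# distributed Fugaku-optimized code discussed in the talk.

def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a matrix stored in CSR (compressed sparse row) form."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y[row] = s
    return y

def pagerank(indptr, indices, data, damping=0.85, tol=1e-10, max_iter=100):
    """Power iteration on a column-stochastic link matrix stored in CSR."""
    n = len(indptr) - 1
    x = [1.0 / n] * n
    for _ in range(max_iter):
        y = csr_matvec(indptr, indices, data, x)
        # Damped update: y = damping * A x + (1 - damping) / n
        y = [damping * v + (1.0 - damping) / n for v in y]
        if sum(abs(a - b) for a, b in zip(x, y)) < tol:
            return y
        x = y
    return x

# Tiny 3-page example: column-stochastic adjacency matrix in CSR form.
indptr  = [0, 2, 4, 6]
indices = [1, 2, 0, 2, 0, 1]
data    = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
print(pagerank(indptr, indices, data))

In the extreme-scale setting described above, the dense iterate x itself no longer fits on a single node, which is precisely why the talk's methods rely on distributed sparse matrix products rather than replicating dense vectors on every node.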