Abstract: The performance penalty of page table walks after TLB misses is a serious concern for modern computer systems. It is particularly severe when processing emerging big-data applications, which generally experience TLB misses more frequently due to their larger memory footprints and lower access locality. To execute such workloads more efficiently, it is necessary to revisit cache management: caches are accessed during page table walks to fetch Page Table Entries (PTEs), yet current systems do not optimize them accordingly. In this talk, Dr. Arima will introduce their recent proposal, which optimizes the allocation of PTEs in caches. More specifically, the scheme adjusts the eviction priorities of PTEs and data at each level of the cache hierarchy to maximize performance.
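
To give a rough sense of what adjusting eviction priorities between PTEs and data could look like, below is a minimal sketch of a single set of a set-associative cache whose victim selection deprioritizes one class of lines. This is a hypothetical illustration, not the speaker's actual scheme; the names CacheSet, evict_pte_first, and the simple LRU-with-class-preference policy are assumptions made for the example.

    # Hypothetical sketch: one set of a set-associative cache where victim
    # selection prefers to evict either PTE lines or data lines, depending
    # on a per-cache-level policy knob (evict_pte_first).
    from collections import OrderedDict

    class CacheSet:
        def __init__(self, ways, evict_pte_first):
            self.ways = ways
            self.evict_pte_first = evict_pte_first  # per-level policy knob
            self.lines = OrderedDict()              # tag -> is_pte, LRU order (oldest first)

        def access(self, tag, is_pte):
            if tag in self.lines:                   # hit: move to MRU position
                self.lines.move_to_end(tag)
                return True
            if len(self.lines) >= self.ways:        # miss: make room first
                self._evict()
            self.lines[tag] = is_pte
            return False

        def _evict(self):
            # Evict the least-recently-used line of the deprioritized class;
            # fall back to plain LRU if that class is absent from this set.
            for tag, is_pte in self.lines.items():
                if is_pte == self.evict_pte_first:
                    del self.lines[tag]
                    return
            self.lines.popitem(last=False)

For example, a lower-level cache set could be configured with evict_pte_first=False to retain PTEs fetched by the page table walker at the expense of data lines, while a level serving mostly data accesses could do the opposite.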