Memory management is a fundamental problem in computer architecture and operating systems. We consider a two-level memory system with a fast but small cache and a slow but large main memory. The underlying theoretical problem is known as the paging problem. A sequence of requests to pages has to be served by making each requested page available in the cache. A paging strategy replaces pages in the cache with requested ones. The aim is to minimize the number of page faults, which occur whenever a requested page is not in the cache.
Experience shows that the Least-Recently-Used (LRU) paging strategy usually incurs about 3 times the optimum number of faults. This contrasts with the theoretical worst case, in which this factor can be as large as the cache size k.
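The gap between these two factors can be observed in simulation. The following sketch counts the faults of LRU and of the optimal offline strategy, here implemented via Belady's farthest-in-future eviction rule, which is known to be optimal for paging; the function names `lru_faults` and `opt_faults` are illustrative choices, not from the talk.

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Count page faults of LRU with cache size k on a request sequence."""
    cache = OrderedDict()  # ordered from least to most recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                    # fault: page must be brought in
            if len(cache) >= k:
                cache.popitem(last=False)  # evict least-recently-used page
            cache[page] = True
    return faults

def opt_faults(requests, k):
    """Belady's optimal offline strategy: on a fault, evict the cached page
    whose next request lies farthest in the future (or never occurs again)."""
    cache = set()
    faults = 0
    for i, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) >= k:
            def next_use(p):
                try:
                    return requests.index(p, i + 1)
                except ValueError:
                    return float('inf')   # never requested again
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return faults
```

Cycling through k+1 distinct pages makes LRU fault on every request while the optimal strategy faults only about once every k requests, which is exactly the worst-case ratio of k; sequences with strong locality of reference, by contrast, keep both counts close together.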
In this talk we present a general lower bound on the minimum number of faults, which provides insight into the global structure of a given request sequence. In addition, we derive a characterization of the number of faults incurred by LRU. Furthermore, we classify the set of all request sequences according to certain parameters and prove bounds on the competitive ratio of LRU that depend on these parameters. This bound varies between 3 and k, i.e., it includes the worst case, but it also explains for which sequences LRU achieves a constant competitive ratio. The classification is motivated by the structure of request sequences arising in practical applications: locality of reference and characteristic data access patterns.