Authors: Karim Youssef (Virginia Tech, Lawrence Livermore National Laboratory); Keita Iwabuchi (Lawrence Livermore National Laboratory); Wu-chun Feng (Virginia Tech); and Maya Gokhale and Roger Pearce (Lawrence Livermore National Laboratory)
Abstract: The exponential growth in data set sizes across multiple domains creates challenges in storing data efficiently and in performing scalable computations on such data. Memory mapping files on different storage types offers a uniform interface and improves programming productivity for applications that perform in-place or out-of-core computations. Multi-threaded applications, however, incur I/O contention on mapped files, hampering their scalability. In addition, many applications handle sparse data structures, making storage efficiency a desirable feature. To address these challenges, we present SparseStore, a tool that transparently partitions a memory-mapped persistent region into multiple files with dynamic and sparse allocation. We provide SparseStore as a part of UMap, a user-space page management tool for memory mapping. Our experiments demonstrated that UMap with SparseStore yielded up to 12x speedup compared to system-level mmap, and up to 2x speedup compared to the default UMap configuration, which maps a single file.
Best Poster Finalist (BP): no