Task Bench: A Parameterized Benchmark for Evaluating Parallel Runtime Performance
Machine Learning, Deep Learning and Artificial Intelligence
Requirements, Performance, and Benchmarks
Reliability and Resiliency
Time: Wednesday, 18 November 2020, 1:30pm - 2pm EDT
Description: We present Task Bench, a parameterized benchmark designed to explore the performance of distributed programming systems under a variety of application scenarios. Task Bench dramatically lowers the barrier to benchmarking and comparing multiple programming systems by making the implementation for a given system orthogonal to the benchmarks themselves: every benchmark constructed with Task Bench runs on every Task Bench implementation. Furthermore, Task Bench's parameterization enables a wide variety of benchmark scenarios that distill the key characteristics of larger applications.
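The parameterization described above can be illustrated with a small sketch. This is not the actual Task Bench API; it is a hypothetical model of the core idea: a benchmark is a task graph defined by a width (tasks per time step), a number of steps, and a dependence pattern connecting consecutive steps, independent of which runtime executes it.

```python
def stencil_deps(task, width):
    """1D stencil pattern: depend on left/right neighbors in the previous step."""
    return [d for d in (task - 1, task, task + 1) if 0 <= d < width]

def trivial_deps(task, width):
    """No cross-task communication: each task depends only on itself."""
    return [task]

def build_task_graph(width, steps, pattern):
    """Return {(step, task): [(step-1, dep), ...]} for the chosen pattern.

    Any runtime-specific Task Bench implementation would walk a graph
    like this, launching one task per (step, task) point.
    """
    graph = {}
    for step in range(steps):
        for task in range(width):
            deps = [] if step == 0 else [(step - 1, d) for d in pattern(task, width)]
            graph[(step, task)] = deps
    return graph

g = build_task_graph(width=4, steps=3, pattern=stencil_deps)
print(g[(1, 0)])  # -> [(0, 0), (0, 1)]: task 0 at step 1 depends on step-0 neighbors
```

Because the graph is data, swapping `stencil_deps` for `trivial_deps` (or any other pattern) changes the benchmark scenario without touching the per-system implementation.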
To assess the effectiveness and overheads of the tested systems, we introduce a novel metric: minimum effective task granularity (METG). We conduct a comprehensive study with 15 programming systems on up to 256 Haswell nodes of the Cori supercomputer. At scale, tasks of 100μs duration are the finest granularity that any system runs efficiently with current technologies. We also study each system's scalability and its ability to hide communication and mitigate load imbalance.
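The METG metric can be sketched as follows. Assuming (as a simplification of the paper's methodology) that we have already measured efficiency, the fraction of peak useful work rate, at several task granularities, METG at a given threshold is the smallest granularity that still meets that efficiency. The measurements below are hypothetical, for illustration only.

```python
def metg(measurements, threshold=0.5):
    """Minimum effective task granularity at the given efficiency threshold.

    `measurements` maps task granularity (microseconds of useful work per
    task) to measured efficiency (fraction of peak useful work rate).
    Returns the smallest granularity whose efficiency meets the threshold,
    or None if the threshold is never reached.
    """
    eligible = [g for g, eff in measurements.items() if eff >= threshold]
    return min(eligible) if eligible else None

# Hypothetical measurements for one system at a fixed node count:
measured = {1: 0.02, 10: 0.18, 100: 0.62, 1000: 0.95, 10000: 0.99}
print(metg(measured))  # -> 100: finest granularity with >= 50% efficiency
```

Smaller METG values indicate lower runtime overhead: the system can profitably run finer-grained tasks before overhead dominates useful work.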