Samyam Rajbhandari is a researcher at Microsoft with a background in high-performance computing. He works on developing high-performance infrastructure for accelerating large-scale deep learning training and inference on parallel and distributed systems. His research results are used in multiple Microsoft systems and products, such as Bing, Ads, and AzureML, to improve performance and capacity. He developed the core technology in DeepCPU and DeepSpeed, resulting in order(s)-of-magnitude improvements in speed and scale for DL inference and training. His recent work on memory optimization has enabled the training of very large models, including Microsoft's 17.2B-parameter Turing-NLG model. Samyam received his PhD in Computer Science from Ohio State University.
Machine Learning, Deep Learning and Artificial Intelligence