Campus Event Calendar

Event Entry

What and Who

Operating System Services for High-Throughput Accelerators

Mark Silberstein
Technion - Israel Institute of Technology
SWS Colloquium

Mark Silberstein is an Assistant Professor in the Electrical Engineering Department at the Technion, Israel. He believes that building practical, programmable, and efficient computer systems with computational accelerators requires cross-cutting changes in system interfaces, OS design, hardware mechanisms, storage and networking services, as well as programming models and parallel algorithms, all of which constitute his research interests and keep him busy and excited.

Web page: https://sites.google.com/site/silbersteinmark
AG 1, AG 2, AG 3, AG 4, AG 5, SWS, RG1, MMCI  
AG Audience
English

Date, Time and Location

Monday, 17 February 2014
14:00
90 Minutes
Building G26
Room 113
Kaiserslautern

Abstract

Future applications will need to use programmable high-throughput accelerators like GPUs to achieve their performance and power goals. However, building efficient systems that use accelerators today is incredibly difficult. I argue that the main problem lies in the lack of appropriate OS support for accelerators -- while OSes provide optimized resource management and I/O services to CPU applications, they make no such services available to accelerator programs.

I propose to build an operating system layer for GPUs which provides I/O services via familiar OS abstractions directly to programs running on GPUs. This layer effectively transforms GPUs into first-class computing devices with full I/O support, extending the constrained GPU-as-coprocessor programming model.

As a concrete example I will describe GPUfs, a software layer which enables GPU programs to access host files. GPUfs provides a POSIX-like API, exploits parallelism for efficiency, and optimizes for access locality by extending the CPU buffer cache into the physical memories of all GPUs and CPUs in a single machine. Using real benchmarks I will show that GPUfs simplifies the development of efficient applications by eliminating GPU management complexity, and broadens the range of applications that can be accelerated by GPUs. For example, a simple self-contained GPU program which searches for a set of strings in the entire tree of Linux kernel source files completes in about a third of the time of an 8-core CPU run.
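
To make the GPUfs example concrete, the following is a minimal CUDA sketch of how a device-side, POSIX-like file API could be used for the string-search scenario above. The gopen/gread/gclose names, the flag constant, and the chunking scheme are illustrative assumptions for this sketch, not the exact GPUfs interface.

#include <cstddef>

// Assumed GPUfs-style device-side API (hypothetical declarations; the real
// library defines its own headers and semantics).
__device__ int    gopen(const char *path, int flags);
__device__ size_t gread(int fd, size_t offset, size_t size, void *buffer);
__device__ int    gclose(int fd);

#define GRDONLY 0        // hypothetical read-only flag
#define CHUNK   16384    // bytes each thread block scans

// Each thread block opens the file, reads one chunk directly through the
// GPU file-system layer, and counts occurrences of `pattern` in parallel.
__global__ void grep_kernel(const char *path, const char *pattern,
                            int patlen, unsigned int *hits)
{
    __shared__ int fd;
    __shared__ size_t got;
    __shared__ char buf[CHUNK];

    if (threadIdx.x == 0)
        fd = gopen(path, GRDONLY);       // opened once per block
    __syncthreads();
    if (fd < 0)
        return;                          // fd is shared, so this is uniform

    if (threadIdx.x == 0)
        got = gread(fd, (size_t)blockIdx.x * CHUNK, CHUNK, buf);
    __syncthreads();

    // Threads cooperatively test candidate positions inside the chunk
    // (matches spanning chunk boundaries are ignored for brevity).
    for (size_t i = threadIdx.x; i + patlen <= got; i += blockDim.x) {
        bool match = true;
        for (int j = 0; j < patlen; ++j)
            if (buf[i + j] != pattern[j]) { match = false; break; }
        if (match)
            atomicAdd(hits, 1u);
    }

    __syncthreads();
    if (threadIdx.x == 0)
        gclose(fd);
}

A host program would launch grep_kernel with one block per file chunk and pass the path and pattern in device-accessible memory; the point of the sketch is that the kernel itself performs the file I/O, with no CPU-side staging of file data.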

I will then describe my ongoing work on native network support for GPUs, current open problems and future directions.

The talk is self-contained; no background in GPU computing is necessary.

This is joint work with Emmett Witchel, Bryan Ford, Idit Keidar and UT Austin students.


Video Broadcast

Yes
Saarbrücken
Building E1 5
Room 029
