Max-Planck-Institut für Informatik


What and Who
Title: Operating System Services for High-Throughput Accelerators
Speaker: Mark Silberstein
Coming from: Technion, Israel Institute of Technology
Speaker's Bio: Mark Silberstein is an Assistant Professor in the Electrical Engineering Department at the Technion, Israel. He believes that building practical, programmable, and efficient computer systems with computational accelerators requires cross-cutting changes in system interfaces, OS design, hardware mechanisms, storage and networking services, programming models, and parallel algorithms. These topics constitute his research interests and keep him busy and excited.


Event Type: SWS Colloquium
Visibility: D1, D2, D3, D4, D5, SWS, RG1, MMCI
Level: AG Audience
Date, Time and Location
Date: Monday, 17 February 2014
Duration: 90 Minutes
Abstract: Future applications will need to use programmable high-throughput accelerators like GPUs to achieve their performance and power goals. However, building efficient systems that use accelerators is incredibly difficult today. I argue that the main problem lies in the lack of appropriate OS support for accelerators: while OSes provide optimized resource management and I/O services to CPU applications, they make no such services available to accelerator programs.

I propose to build an operating system layer for GPUs which provides I/O services via familiar OS abstractions directly to programs running on GPUs. This layer effectively transforms GPUs into first-class computing devices with full I/O support, extending the constrained GPU-as-coprocessor programming model.

As a concrete example, I will describe GPUfs, a software layer that enables GPU programs to access host files. GPUfs provides a POSIX-like API, exploits parallelism for efficiency, and optimizes for access locality by extending the CPU buffer cache into the physical memories of all GPUs and CPUs in a single machine. Using real benchmarks, I will show that GPUfs simplifies the development of efficient applications by eliminating GPU management complexity, and broadens the range of applications that can be accelerated by GPUs. For example, a simple self-contained GPU program that searches for a set of strings in the entire tree of Linux kernel source files completes in about a third of the time of an 8-core CPU run.
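To give a flavor of what a POSIX-like file API callable from GPU code might look like, the sketch below uses names (gopen, gread, gclose, O_GRDONLY) in the style of the published GPUfs interface; the exact signatures, the header name, and the file path are illustrative assumptions, not a drop-in example:

```cuda
// Illustrative sketch only: assumes a GPUfs-like library whose header
// ("gpufs.h" here, a hypothetical name) exposes file calls usable from
// device code. In GPUfs, these calls are made cooperatively by all
// threads of a thread block rather than by individual threads.
#include "gpufs.h"

__global__ void grep_kernel(const char* needle, int needle_len,
                            char* buf, int buf_len, int* match_count)
{
    // The whole thread block opens one file (block-wide call).
    int fd = gopen("/data/kernel-src/file.c", O_GRDONLY);

    // Block-wide read of up to buf_len bytes from offset 0; the GPUfs
    // buffer cache may satisfy this without touching the CPU.
    int n = gread(fd, 0, buf_len, buf);

    // Threads then scan disjoint slices of buf[0..n) for the needle
    // in parallel, accumulating hits into *match_count (elided).

    gclose(fd);  // block-wide close
}
```

The point of the sketch is the programming model: file descriptors, open/read/close semantics, and a shared buffer cache become available inside a GPU kernel, so the host-side staging code that normally shuttles file data to the GPU disappears.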

I will then describe my ongoing work on native network support for GPUs, current open problems and future directions.

The talk is self-contained; no background in GPU computing is necessary.

This is joint work with Emmett Witchel, Bryan Ford, Idit Keidar, and students at UT Austin.
Video Broadcast
Video Broadcast: Yes
To Location: Saarbrücken
To Building: E1 5
To Room: 029
Tags, Category, Keywords and additional notes
Attachments, File(s):
  • Susanne Girard, 02/11/2014 12:50 PM
  • Susanne Girard, 02/10/2014 11:50 AM -- Created document.