Campus Event Calendar

Event Entry

New for: D2, D3

What and Who

Mesos: Multiprogramming for Datacenters

Prof. Ion Stoica
University of California, Berkeley
SWS Distinguished Lecture Series

Ion Stoica is an Associate Professor in the EECS Department at the
University of California, Berkeley, where he does research on cloud
computing and networked computer systems. Past work includes the Chord
DHT, Dynamic Packet State (DPS), the Internet Indirection
Infrastructure (i3), declarative networks, replay debugging, and
multi-layer tracing in distributed systems. His current research
includes resource management and scheduling for data centers, cluster
computing frameworks, and network architectures. He is the recipient
of a SIGCOMM Test of Time Award (2011), the 2007 CoNEXT Rising Star
Award, a Sloan Foundation Fellowship (2003), a Presidential Early
Career Award for Scientists and Engineers (PECASE) (2002), and the
2001 ACM Doctoral Dissertation Award. In 2006, he co-founded Conviva,
a startup to commercialize technologies for large-scale video
distribution.
AG 1, AG 2, AG 3, AG 4, AG 5, SWS, RG1, MMCI  
Expert Audience
English

Date, Time and Location

Tuesday, 23 August 2011
13:30
90 Minutes
E1 5
5th floor
Saarbrücken

Abstract

Today's datacenters need to support a variety of applications, and an
even wider variety of dynamically changing workloads. In this talk, I
will present Mesos, a platform for sharing commodity clusters between
diverse computing frameworks, such as Hadoop and MPI. Sharing improves
cluster utilization and avoids per-framework data replication. To
support the diverse requirements of these frameworks, Mesos employs a
two-level scheduling mechanism called resource offers: Mesos decides
how many resources to offer each framework, while frameworks decide
which resources to accept and which computations to schedule on them.
To allocate resources across frameworks, Mesos uses Dominant Resource
Fairness (DRF). DRF generalizes fair sharing to multiple resources,
provides sharing incentives, and is strategy-proof. Our experimental
results show that Mesos achieves near-optimal locality when sharing
the cluster among diverse frameworks, scales up to 50,000 nodes, and
is resilient to node failures.
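To illustrate the DRF idea mentioned in the abstract, the following is a
minimal Python sketch of progressive filling under Dominant Resource
Fairness: each framework's "dominant share" is its largest fractional
share of any single resource, and the allocator repeatedly grants one
task to the framework with the smallest dominant share. The cluster
capacities and per-task demand vectors below are illustrative
assumptions, not the actual Mesos implementation.

```python
# Assumed cluster capacity: 9 CPUs, 18 GB of memory.
TOTAL = {"cpu": 9.0, "mem": 18.0}

# Assumed per-task demand vector for each framework.
DEMANDS = {
    "A": {"cpu": 1.0, "mem": 4.0},  # memory-heavy framework
    "B": {"cpu": 3.0, "mem": 1.0},  # CPU-heavy framework
}

def dominant_share(alloc):
    """A framework's dominant share: its largest share of any one resource."""
    return max(alloc[r] / TOTAL[r] for r in TOTAL)

def drf_allocate():
    alloc = {u: {r: 0.0 for r in TOTAL} for u in DEMANDS}
    tasks = {u: 0 for u in DEMANDS}
    used = {r: 0.0 for r in TOTAL}
    while True:
        # Progressive filling: serve the framework with the smallest
        # dominant share next.
        u = min(DEMANDS, key=lambda f: dominant_share(alloc[f]))
        d = DEMANDS[u]
        if any(used[r] + d[r] > TOTAL[r] for r in TOTAL):
            break  # sketch simplification: stop once the next task won't fit
        for r in TOTAL:
            alloc[u][r] += d[r]
            used[r] += d[r]
        tasks[u] += 1
    return tasks

print(drf_allocate())  # e.g. {'A': 3, 'B': 2} for the demands above
```

With these numbers, framework A ends up with a 2/3 share of memory (its
dominant resource) and B with a 2/3 share of CPU (its dominant
resource), so both dominant shares are equalized — which is exactly the
fairness condition DRF targets.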

Contact

Claudia Richter
9303 9103
--email hidden

Video Broadcast

Yes
Kaiserslautern
G26
206

Carina Schmitt, 08/05/2011 14:40
Claudia Richter, 08/04/2011 16:53
Claudia Richter, 08/04/2011 15:31 -- Created document.