Campus Event Calendar

Event Entry

What and Who

THE FUTURE IS NOT FRAMED

David Luebke
University of Virginia
AG4 Group Meeting
AG 4  
AG Audience
English

Date, Time and Location

Tuesday, 12 April 2005
13:00
45 Minutes
46.1 - MPII
019
Saarbrücken

Abstract

The ultimate display will not show images. To drive the display of the
future, we must abandon our traditional concepts of pixels, and of
images as grids of coherent pixels, and of imagery as a sequence of
images.

So what is this ultimate display? One thing is obvious: the display of
the future will have incredibly high resolution. A typical monitor
today has 100 dpi, far below that of a satisfactory printer. Several
technologies offer the prospect of much higher resolutions; even today
you can buy a 300 dpi e-book reader. Accounting for hyperacuity, one
can argue that a "perfect" desktop-sized monitor would require about
6000 dpi, call it 11 gigapixels. Even if we don't seek a perfect
monitor, we do want large displays. The very walls of our offices
should be active display surfaces, addressable at a resolution
comparable to or better than that of current monitors.
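
As a rough back-of-the-envelope check (the display size below is an
assumption for illustration, not a figure from the talk), a
desktop-sized surface of about 20 by 15 inches at 6000 dpi lands near
that number:

    # Sanity check of the ~11 gigapixel figure.
    # The 20 x 15 inch desktop display size is an assumed value for illustration.
    width_in, height_in, dpi = 20, 15, 6000
    pixels = width_in * height_in * dpi ** 2
    print(pixels)  # 10800000000 -> roughly 11 gigapixels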

It's not just spatial resolution, either. We need higher temporal
resolution: hardcore gamers already use single buffering to reduce
delays. The human factors literature justifies this: even 15 ms of
delay can harm task performance. Exotic technologies (holographic,
autostereoscopic...) just increase the spatial, temporal, and
directional resolution required.

Suppose we settle for 1-gigapixel displays that can refresh at 240 Hz,
roughly 4000x typical display bandwidths today. Recomputing and
refreshing every pixel every time is a Bad Idea, for power and thermal
reasons if nothing else.
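
For scale, a rough comparison of those numbers against an assumed
"typical" display of today (the 1024x768 at 75 Hz baseline is an
assumption for illustration; the abstract only states the 4000x figure):

    # Rough bandwidth comparison behind the "4000x" figure.
    # Baseline of 1024x768 @ 75 Hz is an assumed typical display, not a figure from the talk.
    future_rate = 1_000_000_000 * 240       # 1 gigapixel refreshed at 240 Hz
    today_rate = 1024 * 768 * 75            # about 59 Mpixels per second
    print(round(future_rate / today_rate))  # about 4069 -> roughly 4000x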

I will present an alternative: discard the frame. Send the display
streams of samples (location+color) instead of sequences of images.
Build hardware into the display to buffer and reconstruct images from
these samples. Exploit temporal coherence: send samples less often
where imagery is changing slowly. Exploit spatial coherence: send
fewer samples where imagery is low-frequency. Without the rigid
sampling patterns of framed renderers, sampling and reconstruction can
adapt with very fine granularity to spatio-temporal image change.
Closed-loop feedback guides sampling toward edges or motion in the
image. A temporally deep buffer stores all the samples
created over a short time interval for use in reconstruction.
Reconstruction responds both to sampling density and spatio-temporal
color gradients. I argue that this will reduce bandwidth requirements
by 1-2 orders of magnitude, and show results from our preliminary
experiments.
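
To make the idea concrete, here is a minimal, hypothetical sketch of a
temporally deep sample buffer and a reconstruction step. The sample
format, the time window, and the distance/age weighting below are
assumptions for illustration, not the system the talk will present:

    # Hypothetical sketch of frameless display reconstruction.
    # A stream of (x, y, t, color) samples is buffered over a short time window;
    # each output pixel is a weighted average of nearby samples, with newer
    # samples weighted more heavily.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        x: float        # position on the display, in pixels
        y: float
        t: float        # time the sample was shaded, in seconds
        color: tuple    # (r, g, b)

    class DeepBuffer:
        """Keeps all samples received during the last `window` seconds."""
        def __init__(self, window=0.05):
            self.window = window
            self.samples = []

        def add(self, sample):
            self.samples.append(sample)

        def evict(self, now):
            self.samples = [s for s in self.samples if now - s.t < self.window]

    def reconstruct_pixel(buf, px, py, now, radius=2.0, age_scale=0.02):
        """Weighted average of nearby recent samples; weight falls off with
        spatial distance and with sample age."""
        total_w, accum = 0.0, [0.0, 0.0, 0.0]
        for s in buf.samples:
            d2 = (s.x - px) ** 2 + (s.y - py) ** 2
            if d2 > radius ** 2:
                continue
            w = 1.0 / ((1.0 + d2) * (1.0 + (now - s.t) / age_scale))
            total_w += w
            for i in range(3):
                accum[i] += w * s.color[i]
        if total_w == 0.0:
            return (0.0, 0.0, 0.0)      # no samples yet: leave the pixel black
        return tuple(c / total_w for c in accum)

    # Example: one red sample arrives, then the pixel near it is reconstructed.
    buf = DeepBuffer()
    buf.add(Sample(x=10.2, y=4.7, t=0.001, color=(1.0, 0.0, 0.0)))
    print(reconstruct_pixel(buf, px=10, py=5, now=0.004))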

Contact

Volker Blanz

Volker Blanz, 04/11/2005 17:39 -- Created document.