Synthesizing photo-realistic images is of crucial importance for a large variety of applications, including movies, telepresence, design visualization, and training simulators. New technologies such as virtual and augmented reality require high-quality renderings at high frame rates, which poses a tremendous challenge for the design of rendering algorithms. Traditionally, this challenge has been addressed either by simulating light transport from physical first principles, or by re-using existing images to create new views. In recent years, data-driven techniques have risen as a third pillar in the space of rendering algorithms. All approaches have their distinct pros and cons, giving rise to a continuum of hybrid algorithms that try to merge the best of the three worlds. In this talk, I will give an overview of this continuum, with an emphasis on our more recent work, which seeks to reconcile image quality, rendering efficiency, and controllability.