In the self-adjusting computation model, programs can respond
automatically and efficiently to input changes by tracking the dynamic
data dependencies of the computation and incrementally updating the output
as needed. After a run from scratch, the input can be changed and the
output can be updated via change-propagation, a mechanism for re-executing
the portions of the computation affected by the changes while reusing the
unaffected parts. Previous research shows that self-adjusting computation
can be effective at achieving near-optimal update bounds for various
applications. We address the question of how to write and reason about
self-adjusting programs.
We propose a language-based technique for annotating ordinary programs and
compiling them into equivalent self-adjusting versions. We also provide a
cost semantics and a notion of trace distance that together enable
reasoning about the effectiveness of self-adjusting computation at the
source level.
To facilitate asymptotic analysis, we propose techniques for composing and
generalizing concrete distances via trace contexts (traces with holes).
The translation preserves the extensional semantics of the source
programs and the intensional cost of from-scratch runs, and it ensures
that change-propagation between two evaluations takes time bounded by
their relative trace distance.
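The change-propagation mechanism described above can be illustrated with a minimal sketch. This is not the paper's formal system or compiler; the names `Cell` and `Comp` are our own illustrative choices. Input cells record which computations read them, so that a change to a cell re-executes only the computations that actually depend on it, leaving unaffected parts of the run untouched.

```python
class Cell:
    """A modifiable input; tracks the computations that read it."""

    def __init__(self, value):
        self.value = value
        self.readers = set()

    def read(self, comp):
        # Record the dynamic dependency: `comp` read this cell.
        self.readers.add(comp)
        return self.value

    def write(self, value):
        if value != self.value:
            self.value = value
            # Change-propagation: re-run only the affected computations.
            for comp in list(self.readers):
                comp.run()


class Comp:
    """A computation whose body reads cells via `cell.read(self)`."""

    def __init__(self, body):
        self.body = body
        self.result = None
        self.run()

    def run(self):
        self.result = self.body(self)


# A from-scratch run, followed by an input change and an update.
a, b = Cell(1), Cell(2)
total = Comp(lambda c: a.read(c) + b.read(c))
print(total.result)  # 3
a.write(10)          # only `total` is re-executed
print(total.result)  # 12
```

A real self-adjusting implementation additionally memoizes and reuses portions of the trace rather than re-running whole computations, which is what makes update times proportional to trace distance rather than to the size of the affected computations.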