Memory safety is a major source of vulnerabilities in computer
systems. Capability machines are a type of CPU that supports
fine-grained privilege separation using capabilities: machine words
that carry a form of authority. Over the last decade, CHERI, a family
of capability machines, has matured into an extensive design
featuring, among other things, a full UNIX-style operating system, CheriBSD.
Building on ideas from CHERI, capability machines are even becoming a
reality in industry: the Arm Morello program is an Arm-led research
effort to create a prototype system-on-chip with capabilities. One of
the promises of capability machines is that they can enforce security
properties that we expect from high-level languages, in particular
stack safety, even when machine code is linked with other untrusted
and possibly adversarial machine code. In this talk, I will discuss
what it takes to realise that promise in practice. Since stack safety
properties can be quite subtle, it is crucial to formally reason about
the enforcement mechanisms enabled by capabilities. This is a complex
task that involves reasoning about the interaction between known code
and unknown, untrusted code. We use Iris to formally reason about the
deep semantic properties of capability machines.