Making the right model assumptions is crucial in developing robust and
efficient computing systems. The ability of a model to solve
distributed computing problems is primarily determined by the
/synchrony assumptions/ the model makes. Since many problems in
distributed computing are impossible to solve in a fully asynchronous
system, it is important to identify the minimal synchrony assumptions
that are sufficient to solve a given problem. These assumptions can be
conveniently encapsulated in the /weakest failure detector/ abstraction.
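
As a concrete illustration (not part of the talk itself; the class
name, timeout policy, and process identifiers below are assumptions
made purely for exposition), a failure detector can be thought of as a
local module that each process queries for a set of suspected
processes. A minimal sketch of the classical eventually-perfect
detector, which encapsulates partial synchrony via adaptive timeouts:

```python
# Toy sketch of an eventually-perfect failure detector over simulated
# heartbeats. All names and the doubling-timeout policy are
# illustrative assumptions, not a definitive implementation.

class EventuallyPerfectFD:
    """Suspects a process whose heartbeat is overdue; on a false
    suspicion the per-process timeout doubles, so in a partially
    synchronous run every correct process eventually stops being
    suspected (eventual accuracy), while a crashed process stays
    suspected forever (strong completeness)."""

    def __init__(self, processes, initial_timeout=2):
        self.timeout = {p: initial_timeout for p in processes}
        self.last_heartbeat = {p: 0 for p in processes}
        self.suspected = set()

    def heartbeat(self, p, now):
        if p in self.suspected:        # we suspected a live process:
            self.suspected.discard(p)  # revise, and be more patient
            self.timeout[p] *= 2
        self.last_heartbeat[p] = now

    def query(self, now):
        """Return the current set of suspected processes."""
        for p, last in self.last_heartbeat.items():
            if now - last > self.timeout[p]:
                self.suspected.add(p)
        return set(self.suspected)

fd = EventuallyPerfectFD(["p1", "p2"])
fd.heartbeat("p1", 2)
fd.heartbeat("p1", 4)
print(fd.query(5))     # p2's heartbeat is overdue: suspects {'p2'}
fd.heartbeat("p2", 5)  # p2 was merely slow, not crashed
fd.heartbeat("p1", 6)
print(fd.query(7))     # p2's timeout doubled: no one suspected
```

The point of the abstraction is exactly this interface: the algorithm
using the detector sees only the suspect set, while all synchrony
assumptions are hidden behind it.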
In this talk, I will focus on defining the "weakest failure detector
ever": the failure detector that is strong enough to circumvent /some/
asynchronous impossibility and, at the same time, necessary to
circumvent /any/ asynchronous impossibility. In this context, I will
consider the /geometrical/ approach, based on modeling a system state
as a high-dimensional geometrical object, and a distributed computation
as an evolution of this object in space. This approach has proved
instrumental in characterizing the class of tasks solvable in
/asynchronous/ systems. I will argue that applying these ideas to
/partially synchronous/ systems may lead to automatic derivations of the
weakest failure detectors for various distributed computing problems,
and, eventually, to establishing a theory of distributed computational
complexity.