We are witnessing the introduction of “intelligent” autonomous systems into many tasks that were traditionally performed by humans; examples include autonomous vehicles, surveillance drones, and warehouse robots. These systems are often developed using data-driven algorithms, which effectively makes them “black-box” models. For black-box systems to be deployed widely in safety-critical applications, however, it is imperative to develop adequate trust in their behavior.
Toward the goal of developing trust in intelligent systems, in this thesis we propose techniques to (i) explain the behavior of these systems in a human-interpretable manner, and (ii) facilitate their verification through formal methods. We are particularly interested in explaining and formalizing the temporal behavior of these systems. To this end, we design techniques that automatically learn temporal properties from observed executions of a system. We explore a range of problem settings, such as learning from noisy data or from positive data alone, and consider several popular logical formalisms, including Linear Temporal Logic (LTL), Metric Temporal Logic (MTL), and the Property Specification Language (PSL). In my thesis defense, I will give an overview of these problem settings and the techniques developed to handle them. Further, I will present some of the empirical results obtained with prototype implementations of the learning techniques.
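As a hypothetical illustration of the kind of property such techniques aim to infer (this example is not drawn from the thesis itself): given executions of a request-grant protocol, a learner might return the LTL formula G(request -> F grant), stating that every request is eventually followed by a grant, while an MTL counterpart such as G(request -> F_[0,5] grant) would additionally bound the response time to five time units.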