As deep neural networks are increasingly deployed in safety-critical applications such as autonomous driving, there is growing interest in making their inner workings interpretable enough to allow inspection of failure cases. The complexity and opacity of these networks make it difficult to explain why a particular prediction was made. In this talk, I will present our ongoing work on addressing this issue by lifting the anonymity of features: we quantify how discriminative individual features are for high-level concepts or classes, and show how this information can be used to build more inspectable deep neural networks.
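To give a rough intuition for what quantifying feature discriminativity can look like (a minimal sketch under my own assumptions, not the specific method presented in the talk), one could treat each feature in a layer as a scalar detector for a concept and measure how well its activation separates concept-positive from concept-negative inputs, for example with a per-feature ROC AUC. The function and variable names below are illustrative.

```python
# Hypothetical sketch: score how discriminative each feature (e.g. channel)
# of a layer's pooled activations is for a given high-level concept, using
# per-feature ROC AUC between concept-positive and concept-negative samples.
# Names like score_feature_discriminativity are assumptions for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

def score_feature_discriminativity(activations: np.ndarray,
                                    concept_labels: np.ndarray) -> np.ndarray:
    """activations: (n_samples, n_features) pooled layer activations.
    concept_labels: (n_samples,) binary labels, 1 if the concept is present.
    Returns one score per feature in [0, 1]; 0 means uninformative."""
    aucs = np.array([
        roc_auc_score(concept_labels, activations[:, j])
        for j in range(activations.shape[1])
    ])
    # Features whose activation *decreases* when the concept is present are
    # also discriminative, so fold the AUC around chance level (0.5).
    return np.abs(aucs - 0.5) * 2.0

# Toy usage: 200 samples, 16 features; feature 3 carries the concept signal.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
acts = rng.normal(size=(200, 16))
acts[:, 3] += 2.0 * labels
print(score_feature_discriminativity(acts, labels).round(2))
```

Features with high scores could then be inspected or named after the concepts they respond to, which is one plausible route toward the kind of inspectable network discussed above.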