Recent success stories of reinforcement learning in game playing have demonstrated the utility of the reinforcement-learning framework for deriving scalable solutions to multi-agent sequential decision-making problems. However, deploying these solutions beyond simulated environments requires additional building blocks that enable trustworthy decision making. In this talk, I will present some of our recent results on robustness and accountability, two properties that any decision-making system ought to satisfy in order to be deemed trustworthy. These results showcase some of the challenges in designing agents and support systems for robust and accountable multi-agent sequential decision making.