Machine learning models are being increasingly adopted in a variety of real-world scenarios. However, the privacy and confidentiality implications for users in such scenarios are not well understood. In this talk, I will present our work on better understanding and controlling leakage of information, in three parts. The first part of the talk is motivated by the fact that massive numbers of personal photos are disseminated by individuals, many of which unintentionally reveal a wide range of private information. To address this, I will present our work on visual privacy, which aims to identify privacy leakage in visual data. The second part of my talk analyzes privacy leakage during the training of ML models, such as in federated learning, where individuals contribute towards training a model by sharing parameter updates. The third part of my talk focuses on leakage of information at inference time, presenting our work on model functionality stealing threats, where an adversary exploits black-box access to a victim ML model to inexpensively create a replica model. In each part, I also discuss our work on mitigating the leakage of sensitive information to enable widespread adoption of ML techniques in real-world scenarios.