As machine learning systems increasingly process sensitive data in critical domains, systematically assessing and mitigating privacy risks is essential. This talk explores quantitative approaches to evaluating privacy vulnerabilities in ML models, with a focus on membership and attribute inference attacks. I will present ML Privacy Meter, a framework that measures privacy risks and supports regulatory compliance by providing metrics for comparing vulnerabilities across models and training methods. I will also discuss open challenges in privacy risk quantification and propose directions for building more robust, secure, and trustworthy machine learning systems.
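For readers unfamiliar with membership inference, the sketch below illustrates the simplest form of such an attack: a loss-threshold test that guesses an example was in the training set if the model's loss on it is unusually low. The toy dataset, model, and threshold choice here are illustrative assumptions for exposition, not ML Privacy Meter's actual implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data, split into "member" (training) and "non-member" (held-out) halves.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Train the target model on the member split only.
target_model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each individual example."""
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

# Loss-threshold attack: examples with loss below the threshold are
# guessed to be training members. The median is a naive threshold choice.
mem_loss = per_example_loss(target_model, X_mem, y_mem)
non_loss = per_example_loss(target_model, X_non, y_non)
all_loss = np.concatenate([mem_loss, non_loss])

guesses = all_loss < np.median(all_loss)
truth = np.concatenate([np.ones(len(mem_loss)), np.zeros(len(non_loss))])

# Accuracy near 0.5 means little leakage; higher values indicate
# the model's behavior reveals membership in the training set.
print(f"Membership inference attack accuracy: {(guesses == truth).mean():.3f}")
```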