There are dozens, perhaps hundreds, of papers claiming that some data release thought to protect privacy in fact does not. This has led to a widespread belief that anonymized data is easily attacked. As a result, organizations may stop releasing valuable data (Netflix recommendations, certain statistics from genetic studies) or apply anonymization so strong that it reduces the data's utility (the US Census Bureau, the Facebook URLs dataset). In this broadly accessible talk, I will argue that most data anonymity attack papers do not measure privacy correctly, leading to conclusions that are at best invalid and often highly misleading. I will describe a more appropriate measure that we have used in our anonymity bounty program. This is work in progress.