Privacy is a manufactured impediment to all of the social and commercial benefits of Big Data because there are already good ways to protect privacy.
Toronto, ON (PRWEB) October 23, 2014
Protecting personal privacy while leveraging Big Data for important research and business analytics may be achieved with currently available methods and protocols, according to a group of experts from the fields of privacy, de-identification, cryptography, and statistics.
This was the message delivered to professionals responsible for collecting, disclosing and using personal information for data analytics purposes at the "De-Identification Symposium: Preserving Privacy AND Advancing Data Analytics," held earlier this week at Ryerson University in Toronto, Ontario.
As businesses and organizations share patient data with third parties for a variety of purposes, including analytics and research, product development, marketing, and surveillance, privacy is a growing concern both for the individuals who serve as data subjects and for the organizations collecting and analyzing their personal data.
“Strong and scalable de-identification protocols provide a defensible way in which to address privacy concerns and legal obligations while simultaneously preserving the utility of the data for analysis,” said Khaled El Emam, Founder and CEO, Privacy Analytics and co-host of the symposium. “We should not sacrifice the incredible benefits of using data when there are good solutions to address these legitimate privacy concerns.”
Experts at the symposium identified several ‘responsible de-identification’ best practices for anonymizing data. Primary among them is a full and detailed risk assessment. Although there are different ways to meet this requirement, the experts agreed that the context of the data release must be taken into account in a defensible way. Contextual considerations include the type and content of the data itself, the type of organization receiving the data, how the data will be protected from a security standpoint, and how the data will be used.
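To make the idea of a contextual risk assessment concrete, the toy Python sketch below measures a simple re-identification risk (the reciprocal of the smallest group of records sharing the same quasi-identifier values) and compares it against a threshold that varies with the release context. All function names, thresholds, and context labels here are hypothetical illustrations, not any specific standard or product.

```python
from collections import Counter

def equivalence_class_sizes(records, quasi_identifiers):
    """For each record, count how many records share its quasi-identifier values."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return [counts[k] for k in keys]

def max_risk(records, quasi_identifiers):
    """Worst-case re-identification risk: 1 / size of the smallest equivalence class."""
    return 1.0 / min(equivalence_class_sizes(records, quasi_identifiers))

# Hypothetical context-dependent thresholds: a public release tolerates
# far less risk than a release to a vetted recipient with strong controls.
CONTEXT_THRESHOLDS = {"public": 0.05, "vetted_recipient": 0.2}

records = [
    {"age_band": "30-39", "region": "ON"},
    {"age_band": "30-39", "region": "ON"},
    {"age_band": "40-49", "region": "BC"},
]

risk = max_risk(records, ["age_band", "region"])
print(risk <= CONTEXT_THRESHOLDS["public"])  # prints False: one record is unique
```

The point of the sketch is that the same data set can pass or fail depending on the release context, which mirrors the experts’ insistence that risk be assessed against the recipient, security controls, and intended use rather than against the data alone.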
“Much has been written about the demise of privacy and data protection. But this perspective often comes loaded with unrealistic expectations and the pursuit of non-existent zero-risk solutions,” said Dr. Ann Cavoukian, Executive Director of Ryerson’s Privacy and Big Data Institute and co-host of the symposium. “It is far better to pursue real world solutions through ‘Privacy by Design’ to deliver the doubly-enabling, win/win, positive-sum framework of privacy and data analytics.”
The symposium highlighted several ways in which de-identification is enabling the successful secondary use of data while protecting personal privacy, including genetics research using data derived from electronic medical records (EMRs), diabetes research on a database of more than 500,000 patients, quality-improvement and scientific-discovery initiatives using data from a birth registry, and a better understanding of tumor growth from re-analyzing historical clinical trial data.
By applying consistent de-identification standards combined with effective risk-management procedures, organizations can protect patient and consumer privacy and ensure that data custodians remain legally compliant when sharing consumer data sets.