Dr. Daniel Fabbri, Founder & CEO of Maize Analytics spoke at the US Department of Health and Human Services (HHS) conference on “Data Min(d)ing: Privacy and Our Digital Identities,” a meeting featuring a slate of privacy thought leaders from academia, government, and industry.
Dr. Fabbri’s presentation, “Risks of ‘Black Box’ Machine Learning in Compliance and Privacy Programs,” focused on how to effectively monitor access to patient data for inappropriate use with machine learning. Machine learning algorithms have the potential to automate the detection of snooping, identity theft, and other threats by learning the characteristics of good, bad, and anomalous access patterns.
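To make this concrete, one simple way an algorithm can learn “anomalous access patterns” is by scoring how rare a given combination of access attributes is in the historical log. The sketch below is purely illustrative (the field names and the frequency-based scoring are assumptions, not the method described in the talk); production systems use far richer features and models.

```python
from collections import Counter

def anomaly_scores(accesses):
    """Score each access by the rarity of its (user_dept, patient_unit) pair.

    A toy frequency-based detector: pairs that rarely appear in the log
    receive scores near 1, common pairs receive scores near 0.
    Field names here are hypothetical.
    """
    pair_counts = Counter((a["user_dept"], a["patient_unit"]) for a in accesses)
    total = len(accesses)
    return [
        1 - pair_counts[(a["user_dept"], a["patient_unit"])] / total
        for a in accesses
    ]

# Usage: four routine same-department accesses and one unusual one.
log = [
    {"user_dept": "cardiology", "patient_unit": "cardiology"},
    {"user_dept": "cardiology", "patient_unit": "cardiology"},
    {"user_dept": "cardiology", "patient_unit": "cardiology"},
    {"user_dept": "cardiology", "patient_unit": "cardiology"},
    {"user_dept": "billing", "patient_unit": "psychiatry"},
]
scores = anomaly_scores(log)  # the last access gets the highest score
```

Even in this toy form, the interpretability question is visible: the score alone says an access is unusual, but not *why*, which is the gap Dr. Fabbri highlighted.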
The risk Dr. Fabbri warned about is known as ‘black box’ machine learning: algorithms whose inner workings privacy officers cannot decipher. He stated, “If you cannot extract what the machine learning algorithm is doing, how can you define what your policy is, know it is correct, or defend it to regulators?”
Dr. Fabbri argued that machine learning algorithms may be better applied if they keep the compliance and privacy officer “in the loop.” This is achieved when the algorithms leverage large-scale data analytics to identify trends and patterns in access data, then recommend a policy (a reason an access is appropriate or inappropriate) to the compliance officer. The compliance officer can then accept or reject the recommendation, and the system applies approved policies going forward.
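The recommend-review-apply loop described above can be sketched as a small workflow. This is a minimal illustration of the general pattern, not Maize Analytics’ implementation; the class and rule names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    """A human-readable rule explaining why an access is appropriate."""
    name: str
    predicate: Callable[[dict], bool]  # access record -> matches rule?

class AuditSystem:
    """Sketch of a reviewer-in-the-loop audit workflow (illustrative only)."""

    def __init__(self):
        self.pending = []   # policies mined from the logs, awaiting review
        self.approved = []  # policies the compliance officer accepted

    def recommend(self, policy):
        # Step 1: the system mines access data and proposes a candidate policy.
        self.pending.append(policy)

    def review(self, policy, accept):
        # Step 2: the compliance officer accepts or rejects each recommendation.
        self.pending.remove(policy)
        if accept:
            self.approved.append(policy)

    def is_explained(self, access):
        # Step 3: future accesses are checked against approved policies only.
        return any(p.predicate(access) for p in self.approved)

# Usage: approve one recommended rule, then audit two accesses against it.
system = AuditSystem()
care_team = Policy(
    "care-team access",
    lambda a: a["role"] in {"physician", "nurse"} and a["on_care_team"],
)
system.recommend(care_team)
system.review(care_team, accept=True)

ok = system.is_explained({"role": "physician", "on_care_team": True})
flagged = system.is_explained({"role": "billing", "on_care_team": False})
```

Because every approved policy is a named, human-readable rule, the officer can always state which policy explained a given access — the transparency the ‘black box’ approach lacks.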
Dr. Fabbri concluded, “Leveraging new automation tools is critical for covered entities to efficiently audit accesses to patients’ records, but to do so effectively they must pull back the curtain to ensure systems enforce their policies and do not make inappropriate assumptions on their behalf.”