Today, in collaboration with Harvard University’s Berkman Klein Center, we at Microsoft are publishing a series of materials we believe will contribute to solving a major challenge in securing artificial intelligence and machine learning systems. In short, there is no common terminology today for discussing security threats to these systems and the methods to mitigate them, and we hope these new materials will provide baseline language that enables the research community to collaborate more effectively.
Here is why this challenge is so important to address. Artificial intelligence (AI) is already having an enormous and positive impact on healthcare, the environment, and a host of other societal needs. As these systems become increasingly important to our lives, it’s critical that when they fail, we understand how and why, whether the failure stems from the inherent design of a system or from the actions of an adversary. Hundreds of research papers have been dedicated to this topic, but inconsistent vocabulary from paper to paper has limited the usefulness of important research to data scientists, security engineers, incident responders, and policymakers.
The centerpiece of the materials we’re publishing today is called “Failure Modes in Machine Learning,” which lays out the terminology we developed jointly with the Berkman Klein Center. It includes vocabulary for describing intentional failures caused by an adversary attempting to alter results or steal an algorithm, as well as vocabulary for unintentional failures, such as a system that produces results that might be unsafe…
The entire post “Solving the challenge of securing AI and machine learning systems” appeared first on Microsoft on the Issues.