The pace at which artificial intelligence (AI) is advancing is remarkable. As we look out at the next few years for this field, one thing is clear: AI will be celebrated for its benefits but also scrutinized and, to some degree, feared. It remains our belief that, for AI to benefit everyone, it must be developed and used in ways that warrant people’s trust. Microsoft’s approach, which is based on our AI principles, is focused on proactively establishing guardrails for AI systems so that their risks are anticipated and mitigated and their benefits are maximized…
Governance as a foundation for compliance
While there is much that is new and uncharted in the domain of responsible AI, there is also much that can be learned from adjacent domains. Our responsible AI governance approach borrows the hub-and-spoke model that has worked successfully to integrate privacy, security, and accessibility into our products and services…
Developing rules to enact our principles
In the fall of 2019, we published internally the first version of our Responsible AI Standard, a set of rules for how we enact our responsible AI principles, underpinned by Microsoft’s corporate policy. We published the first version of the Standard with an eye to learning, and with a humble recognition that we were at the beginning of our effort to systematically move from principles to practices. Through a phased pilot across 10 engineering groups and two customer-facing teams, we learned what worked and what did not. Our pilot teams appreciated the examples of how responsible AI concerns can arise. They also sometimes struggled with the open-endedness of the considerations laid out in the Standard and expressed a desire for more concrete requirements and criteria. There was a thirst for more tools, templates, and systems, and for closer integration with existing development practices…
Drawing red lines and working through the grey areas
In the fast-moving and nuanced practice of responsible AI, it is impossible to reduce all of the complex sociotechnical considerations into an exhaustive set of pre-defined rules. This led us to create a process for ongoing review and oversight of high-impact cases and emerging issues and questions…
Evolving our mindset and asking hard questions
Today, we understand that it is critically important for our employees to think holistically about the AI systems we choose to build. As part of this, we all need to think deeply about and account for sociotechnical impacts. That’s why we’ve developed training and practices to help our teams build the muscle of asking ground-zero questions, such as, “Why are we building this AI system?” and, “Is the AI technology at the core of this system ready for this application?”
In 2020, our mandatory Introduction to Responsible AI training helped more than 145,000 employees learn the foundations of our AI principles, the Responsible AI Standard, and the sensitive use process…
Pioneering new engineering practices
Privacy, and the GDPR experience in particular, taught us the importance of engineered systems and tools for enacting a new initiative at scale and ensuring that key considerations are baked in by design.
As we have been rolling out our responsible AI program across the company, building engineering systems and tools to help deliver on our responsible AI commitments has been a priority for our teams. Although tooling – particularly in its most technical sense – is not capable of the deep, human-centered thinking work that needs to be undertaken while conceiving AI systems, we think it is important to develop repeatable tools, patterns, and practices where possible so that the creative thought of our engineering teams can be directed toward the most novel and unique challenges, not reinventing the wheel. Integrated systems and tools also help drive consistency and ensure that responsible AI is part of the everyday way in which our engineering teams work…
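As a purely illustrative sketch of what one such repeatable pattern can look like, the example below uses the open-source Fairlearn library to disaggregate a model’s accuracy by demographic group; the data, model, and sensitive feature are synthetic stand-ins invented for this example, and this post does not prescribe any specific tool.

```python
# Illustrative only: a disaggregated-evaluation check using the
# open-source Fairlearn library. The dataset, model, and "group"
# feature below are synthetic, invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

rng = np.random.default_rng(0)

# Synthetic data: 1,000 examples, 5 features, a binary label, and a
# binary feature standing in for a demographic group.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)  # hypothetical sensitive feature

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# MetricFrame breaks a metric down by sensitive feature, so per-group
# gaps are visible instead of hidden inside one overall number.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y,
    y_pred=pred,
    sensitive_features=group,
)
print("Overall accuracy:", frame.overall)
print("Accuracy by group:\n", frame.by_group)
print("Largest gap between groups:", frame.difference())
```

A small, repeatable check like this can be folded into an existing development pipeline, leaving engineers’ attention free for the harder, context-specific questions that no tool can answer.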
Scaling our efforts to develop AI responsibly
As we look ahead, we’ll focus on three things: first, consistently and systematically enacting our principles through the continued rollout of our Responsible AI Standard; second, advancing the state of the art of responsible AI through research-to-practice incubations and new engineering systems and tools; and third, continuing to build a culture of responsible AI across the company.
We are acutely aware that, as the adoption of AI technologies accelerates, new and complex ethical challenges will arise. While we recognize that we don’t have all the answers, the building blocks of our approach to responsible AI at Microsoft are designed to help us stay ahead of these challenges and enact a deliberate and principled approach. We will continue to share what we learn, and we welcome opportunities to learn with others.