Microsoft introduces new tools for responsible AI

by Jeremy

Microsoft has announced new capabilities in its responsible AI (RAI) toolkits to help data scientists reduce bias in their machine learning models. Last May, at Microsoft Build, the company introduced three such tools: InterpretML, Fairlearn, and SmartNoise.

SmartNoise, a collaboration between Microsoft and Harvard, uses differential privacy to protect personal data while still allowing researchers to gather insights from it. SmartNoise now offers the ability to generate synthetic data, an artificial sample derived from the original dataset.

Image: Example of Error Analysis exposing the distribution of errors.

By combining the synthetic dataset with the original dataset, researchers can continue to analyze the same data without increasing privacy risk. The company explained that the synthetic data capability would increase collaboration between research parties, democratize knowledge, and support open dataset initiatives.
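The announcement does not spell out how the synthesizers work, but the core idea behind differentially private synthetic data can be sketched in a few lines of plain Python: summarize the original records into a histogram, perturb the counts with Laplace noise calibrated to the privacy budget, and sample artificial records from the privatized distribution. Everything below, including the column values and the epsilon, is illustrative and is not SmartNoise's actual API.

# Conceptual sketch of differentially private synthetic data, not SmartNoise's API.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical original data: one categorical column with four values.
categories = ["A", "B", "C", "D"]
original = rng.choice(categories, size=5_000, p=[0.5, 0.3, 0.15, 0.05])

def synthesize(values, cats, epsilon, n_samples):
    # Counting each category has sensitivity 1, so Laplace(1/epsilon) noise
    # on every cell makes the histogram differentially private.
    counts = np.array([(values == c).sum() for c in cats], dtype=float)
    noisy = counts + rng.laplace(0.0, 1.0 / epsilon, size=len(cats))
    probs = np.clip(noisy, 0, None)
    probs = probs / probs.sum()
    # Sample artificial records from the privatized distribution; analyzing
    # them further does not consume any additional privacy budget.
    return rng.choice(cats, size=n_samples, p=probs)

synthetic = synthesize(original, categories, epsilon=1.0, n_samples=5_000)
print("original shares: ", {c: round(float((original == c).mean()), 3) for c in categories})
print("synthetic shares:", {c: round(float((synthetic == c).mean()), 3) for c in categories})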

Microsoft has also announced the release of a new tool called Error Analysis. The tool enables data scientists to understand the error patterns in their models, identify subgroups with higher error rates, and visually diagnose the root causes of those errors.
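The article does not go into how Error Analysis surfaces those subgroups, but the general "error tree" idea can be illustrated with plain scikit-learn: score a trained model, label each test row as right or wrong, and fit a shallow decision tree on that label so its leaves describe cohorts where errors concentrate. The dataset and models below are stand-ins for the example, not part of Microsoft's tooling.

# Sketch of finding high-error cohorts with an "error tree" (plain scikit-learn,
# not the Error Analysis API).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model being diagnosed.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# 1 where the model is wrong, 0 where it is right.
errors = (model.predict(X_test) != y_test).astype(int)

# A shallow tree over the input features partitions the test set into cohorts;
# leaves dominated by 1s are the subgroups with elevated error rates.
error_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_test, errors)
print(export_text(error_tree, feature_names=list(X_test.columns)))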

According to Microsoft, Error Analysis can be used to dive deeper into questions such as: “Will the self-driving car recognition model still perform well even when it is dark and snowing outside?” or “Does the loan approval model perform similarly for population cohorts across ethnicity, gender, age, and education?”
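As a rough illustration of the second kind of question, comparing a model's error rate across cohorts takes only a few lines of pandas; the age bands and simulated predictions below are invented purely for the example.

# Sketch of a per-cohort performance comparison (simulated data, illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2_000

df = pd.DataFrame({
    "age_band": rng.choice(["18-30", "31-50", "51+"], size=n),
    "actual": rng.integers(0, 2, size=n),
})
# Simulated predictions that happen to be less accurate for one cohort.
flip = np.where(df["age_band"] == "51+", 0.25, 0.10)
df["predicted"] = np.where(rng.random(n) < flip, 1 - df["actual"], df["actual"])

# Error rate per cohort: large gaps flag groups that deserve a closer look.
error_by_cohort = (df["predicted"] != df["actual"]).groupby(df["age_band"]).mean()
print(error_by_cohort)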

Error Analysis had already been in use within Microsoft. It started as a research project in 2018 as part of a collaboration between Microsoft Research and the AI, Ethics, and Effects in Engineering and Research (AETHER) Committee. 

In the future, Microsoft plans to add Error Analysis and other RAI tools to a more extensive model assessment dashboard that is expected to be available in mid-2021, both as open source and in Azure Machine Learning.

“The work doesn’t stop here. We continue to expand the capabilities in FairLearn, InterpretML, Error Analysis, and SmartNoise. We hope you’ll join us on GitHub and contribute directly to helping everyone build AI responsibly,” Sarah Bird, principal program manager at Microsoft; Besmira Nushi, principal researcher at Microsoft; and Mehrnoosh Sameki, senior program manager at Microsoft wrote in a post.
