Responsible AI news from Build

Microsoft Build took place more than a month ago, but only now am I writing about it. I risk being nicknamed after a famous Brazilian Formula 1 driver. (An internal Brazilian joke.)

Build set a historical mark among community events. It introduced many new techniques to bring an online event closer to an in-person one, and the result was great. Three days, almost non-stop, full of technical knowledge.

It’s impossible to write a single post about Build without missing a lot of information. Even I, crazy enough to follow the event for all three days, will miss details from sessions I didn’t attend.

You can watch the sessions on demand at this link: but I hope that, with the help of the many posts I will write about Build, you will be able to make good choices about which sessions to watch.

Responsible AI

The idea of an AI that makes decisions for you, leaving you with no responsibility for them, is very tempting. However, this can lead to the worst Ayn Rand scenarios, where no one is responsible for anything, perhaps even evolving into a Skynet scenario.

We need to be responsible. If the AI denies a loan, insurance coverage or a payment, we need to be accountable for the result.

Nevertheless, we also need to remember that AI depends on data. If we have little data about some groups in our society because our society isn’t fair, this can lead bad AI models to exhibit “racist” behaviour.

To be responsible, we need tools that can “open” AI models and explain to us why the model made one decision instead of another.

Let’s check some tools and sources of information that help achieve this.


Fairlearn

Fairlearn provides tools to measure the fairness of your AI models using metrics computed over model predictions. You can read a step-by-step article about how to use the Fairlearn APIs with Azure Machine Learning here:
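To make the idea concrete, here is a minimal sketch of the kind of metric Fairlearn reports, computed by hand with NumPy rather than through the Fairlearn API: the demographic parity difference, i.e. the gap in approval rates between groups. The decisions and groups below are made-up illustrative data.

```python
import numpy as np

# Hypothetical loan decisions (1 = approved) for two groups, A and B.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Selection rate: fraction of positive (approved) predictions per group.
rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

# Demographic parity difference: gap between the highest and lowest
# selection rate. Zero means both groups are approved at the same rate.
dpd = max(rates.values()) - min(rates.values())

print(f"approval rate A: {rates['A']:.2f}")  # 0.60
print(f"approval rate B: {rates['B']:.2f}")  # 0.40
print(f"demographic parity difference: {dpd:.2f}")  # 0.20
```

A difference of 0.20 means group A is approved 20 percentage points more often than group B, which is the kind of signal that would prompt a closer look at the model and its training data.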

There is also a video on Channel 9 about how to do the same thing:

There are also articles on the Microsoft Research blog about responsible AI:

To go deeper into the concept, there is this study by Microsoft Research, the University of Maryland and Carnegie Mellon University:

This other study explains the math behind one approach to fair classification:

This document goes into deeper detail and contains many more links:

Webinar about developing AI responsibly:


InterpretML

InterpretML is a framework used to explain decisions made by AI models, allowing you to check whether a decision is fair.
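The core idea behind this kind of explanation can be sketched without the library itself. The toy example below (not the InterpretML API; the model weights and features are invented for illustration) breaks a linear credit score into per-feature contributions, showing which feature pushed the decision toward rejection:

```python
import numpy as np

# A hypothetical, already-trained linear credit model: score = w . x + b.
feature_names = ["income", "debt", "years_employed"]
w = np.array([0.8, -1.2, 0.5])  # assumed weights
b = -0.3                        # assumed intercept

x = np.array([0.6, 0.9, 0.4])   # one (normalized) applicant

# Per-feature contribution to the final score: weight * feature value.
contributions = w * x
score = float(contributions.sum() + b)
decision = "approved" if score > 0 else "rejected"

for name, c in zip(feature_names, contributions):
    print(f"{name:>15}: {c:+.2f}")
print(f"score = {score:+.2f} -> {decision}")
```

Here the debt feature contributes -1.08 and drags the score below zero, so the explanation is “the loan was rejected mainly because of the applicant’s debt”. InterpretML’s glassbox models and explainers produce this kind of per-feature attribution for far more complex models.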

The framework is available on GitHub:

Here you can watch a video about how to use InterpretML:

Here you can watch a video about a use case of InterpretML at an airline company:


DiCE

DiCE is a Microsoft Research project used to generate counterfactual explanations of ML model decisions.

For example, let’s say you had a loan rejected. DiCE can provide alternative scenarios that would have resulted in the loan being approved, such as “You would have received the loan if your income were higher by $10k”.
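A counterfactual explanation like that one can be illustrated with a toy brute-force search (this is not DiCE’s API; the loan rule and numbers are entirely made up): keep every other feature fixed and find the smallest income change that flips the decision.

```python
# A made-up loan rule: approve when income covers debt with enough margin.
def loan_approved(income_k, debt_k):
    return income_k - 0.5 * debt_k >= 60

applicant = {"income_k": 50, "debt_k": 20}
print(loan_approved(**applicant))  # False -> the loan was rejected

# Counterfactual search: smallest income increase (in $1k steps) that
# flips the decision, keeping every other feature fixed.
for extra in range(1, 101):
    if loan_approved(applicant["income_k"] + extra, applicant["debt_k"]):
        print(f"You would have received the loan "
              f"if your income was higher by ${extra}k")
        break
```

DiCE does this far more cleverly (optimizing for diverse, close, feasible counterfactuals over real models), but the output has the same shape: a human-readable “what would have had to change” statement.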

You can read more about DiCE and get the framework at this link:

Responsible AI resources from Microsoft

This link is a central place for responsible AI with Microsoft technology, with many videos, articles, frameworks and guidelines on how to ensure responsible AI while building your models.


Considering all these tools to ensure the quality of AI results, movie characters such as HAL or Skynet are further from reality than ever.