Dinis Guarda interviews Juan M. Lavista Ferres, Microsoft CVP and Chief Data Scientist, and Director of Microsoft AI for Good Lab, to discuss the key AI For Good initiatives taken by Microsoft for the benefit of humanity. The Dinis Guarda YouTube Podcast is powered by Businessabc.net, Citiesabc.com, and Wisdomia.ai.

Juan M. Lavista Ferres is the Corporate Vice President and Chief Data Scientist at Microsoft, and the Director of the Microsoft AI for Good Lab, an initiative that harnesses the power of artificial intelligence to address some of the world's most pressing challenges. 

The Microsoft AI for Good Lab focuses on areas such as sustainability, humanitarian action, accessibility, and health. Co-founded by Juan Lavista Ferres in 2018, the AI for Good Lab has undertaken over 200 projects worldwide. Highlighting the idea behind creating the AI for Good Lab, Juan told Dinis:

“The majority of organisations around the world working to solve some of the world's greatest challenges, whether it's infant mortality, disabilities, or sustainability, face the same problem. What we realised is that there is a gap in society: they don't have the structure or capacity to analyse the data collected from cities and societies. For example, you may want to understand how many houses were destroyed by an earthquake, or look for particular areas of cancer in a CT scan. These are just data points, of course, and you need to work with an expert to understand them. From a return-on-investment perspective on the impact of that social responsibility, it made more sense for us, instead of donating capital or money to some of these organisations, to donate our skills and our time. We try our best to help organisations, teach them, work with them to solve the problems, and transfer knowledge so they become self-sustainable.

That was the whole foundation of the team, and we think today that it was certainly the right decision. We think that's the right type of return on investment from a society point of view.”

The Microsoft AI for Good Lab collaborates with prominent organisations like the United Nations, the American Red Cross, The Nature Conservancy, PATH, Seattle Children’s Hospital, Harvard University, Stanford University, and Johns Hopkins University, consistently using AI for the betterment of humanity. 

AI For Good: Key Pillars

The AI for Good initiative at Microsoft is structured around four main areas: AI for Earth, AI for Humanitarian Action, AI for Accessibility, and AI for Health. Each of these pillars addresses specific global challenges, from environmental sustainability to improving healthcare access. Juan explains each facet of the AI for Good initiative:

“The first area where we can help is caring for the Earth, or what we call sustainability. These are projects where we work in collaboration with organisations, for example in Colombia, to understand and measure deforestation, and also to detect illegal deforestation in the Amazon. 

There are about 1.3 billion people in the world with a disability, and about 300 million people who are blind or visually impaired. Now, thanks to AI vision models, if they have a smartphone they have the possibility to navigate the world through these models in a way they couldn't before. Partner organisations and internal teams working on AI development are dedicating their efforts to helping people with disabilities use AI and making sure they can navigate the world. I always say that people who don't understand the value of AI should one day sit with people who are using these apps, because for them it is a game changer.

We live in a world where half of the population does not have access to healthcare. As a society, we need to use AI to help in screening patients, understanding patient behaviours, and finding treatment patterns. We also work a lot in medical imaging. We think there's a lot of value there, and we're just scratching the surface.

We have also worked a lot in the area of disaster response. Whenever there is a big natural disaster, we work with satellite data companies, because they fly over the affected area taking pictures, and we build models on top of those images to produce the maps used by organisations on the ground, whether it's the American Red Cross, UNHCR, or any other organisation. These maps help those organisations do their job: prioritise, understand the logistics, and make sure they deploy enough people to help the people affected on the ground.”

AI for Good: Applications in Sustainability, Humanitarian Action and Health

AI For Good, authored by Juan M. Lavista Ferres, was published by Wiley in April 2024. The book explores the transformative potential of artificial intelligence in addressing critical global challenges. 

“We started the book with the vision that, whenever we were working with organisations around the world, one of the most difficult aspects was showing what they could do with AI. Through those conversations with our partners, AI became a tool for us to help them have an impact. We were showcasing what they could do with AI, and that was basically the whole idea of writing the book: to have these examples. The target audience for the book was the people we work with, researchers who are not necessarily in the AI space but are working in other spaces, trying to use technology to help solve their problems. Our objective was to gather all these examples in a way that makes it easy for us to show them the power of AI and technology. The book has 29 different examples of how we're using AI to help solve real-world challenges, in all of the areas from accessibility to health to sustainability to humanitarian action.”

Driving Change and Managing Risks

During the interview, Juan also highlighted the risks of AI. He acknowledged the growing concerns around deepfakes and misinformation, areas where the AI for Good Lab is actively working to develop solutions. He told Dinis:

“In many ways, computer science and software development are all tools that we, as a society, decide to use to help solve problems. It is no different from what we have been doing for centuries: creating new tools to help us. I don't see a reason to stay away from AI just because there's no very clear definition of AGI. We collectively need to make sure that we maximise the potential of these tools and reduce the chance that they can be used as weapons or for bad purposes.

No matter how much you work, there will always be people who find ways to use them in the wrong ways, and we have already seen this in the case of gen AI, where people are using it to scam others by cloning voices and creating deepfakes.

We have been working with our team on using AI technology to help detect some of these fakes and to better educate the public about these problems. 

I'm always optimistic. I see the risks of this technology, but when we look at the history of humanity, we have something in common: society works together to make sure that, in the end, we set the right foundations for maximising the use of these tools and improving as a society.

The whole idea of responsible AI, and of having a responsible AI framework, is that we saw very early on that there are some risks, and we need to make sure that people understand these risks and that we mitigate them as much as possible.”