AI Ethicist and Author Olivia Gambelin on How Responsible AI Development Leads to Better Innovation and More Trust

No technology has dominated the zeitgeist as quickly as generative AI. It’s sparked debate between those who think it could save humanity and those who fear it will drive us to dystopia. Concerns about accuracy, bias and fairness in particular have led to calls for the development of responsible AI.

AI ethicist and author Olivia Gambelin has been at the forefront of this movement. She works with business leaders on the operational and strategic development of responsible AI, and writes about it in a newsletter, “In Pursuit of Good Tech,” and a new book, “Responsible AI: Implement an Ethical Approach in Your Organization,” which debuted in the U.S. on June 25.

We talked to Olivia recently about the importance of clear communications around AI, one big brand that’s doing it well, and how creating ethical parameters can lead to more innovation. Below is an edited version of the conversation.

Tell me about your new book. Why did you write it, and who is it for?

It’s essentially a playbook for building an AI strategy on responsible foundations. When we first started developing AI, there weren't any best practices, standards, or orders of operation. Now, with responsible AI, we know what works and what doesn't in terms of development standards. The book answers the question of, ‘Where do I start?’ It's written for executives and leadership teams who are tasked with developing AI strategies for their entire organization. At the end of the day, responsible AI is just getting the intended use out of your solution.

Let’s take that one step further. What exactly do you mean by ‘responsible AI’?

It’s essentially AI that does what you intended it to do. Often, it’s built with a goal in mind, but developers aren’t assessing the impact or following best practices, so they end up with results that stray from the original intent. There’s a good example that doesn’t involve AI: Facebook created the ‘like’ button as a way to interact with friends. That was the original intention. But the potential impact wasn’t properly assessed, and unfortunately that ‘like’ button actually spiked the suicide rate.

"[Responsible AI] is essentially AI that does what you intended it to do..."

We work with the Mozilla Foundation, and they have a whole white paper on Trustworthy AI that talks about transparency and fairness. Is that similar to the notion of responsible AI?

I've worked with Mozilla; they're great people. And I really like their frameworks, because they look at those specific values. What my book does is enable business leaders to translate those values into practice. This can feel very abstract, but my book is the scaffolding: it explains the concepts and shows how to put the principles into action.

In a recent LinkedIn post, you wrote: ‘Responsible AI has lost sight of one of its core founding values.’ Can you explain that?

The dominant narrative we're seeing around responsible AI is shaped by the criticism that AI is ‘taking over’ our work and lives. So, responsible AI has taken on a doom-and-gloom narrative. This creates the mindset that the only goal is prevention of harm. But that was never the only founding value. Risk mitigation is very important, don’t get me wrong. But companies are not as motivated to adopt responsible AI practices if they think it’s too hard to do.

The current narrative is missing the fact that responsible AI can be a practice for innovation. Applying ethical constraints can actually improve both the quality and the rate of creativity because it focuses your thinking. Engineers like understanding their parameters, and they can get creative working within those limitations. It's the same thing when you're bringing in ethical limitations. You say, ‘Here's where you can't cross the line. Go figure out how to reach your goal while staying within these guardrails.’ That spurs a creativity that forces a deeper layer of innovation. The original sentiment behind technical innovation was to drive human impact. Responsible AI needs to return to that message, bringing new depth to the design and development of AI tools.

"Applying ethical constraints can actually improve both the quality and the rate of creativity because it focuses your thinking."

When you look at the major players who are doing comms around this issue, who’s doing it well?

One standout example is Salesforce. From day one, they built out a responsible AI team. They brought in the World Economic Forum to do a case study on their KPI motivators, which incorporate responsible AI practices and ethical values. They offer concrete examples and case studies, and they invest in education and awareness. And they have a strong thought leader in Kathy Baxter. You know who's leading the work there, and you can see what they do.

What is the role of communications in shaping these responsible AI strategies?

Within large companies, there are so many information silos that different data science teams end up working to completely different standards of data governance. A Fortune 100 bank I was working with was trying to mitigate unwanted risk and bias; it turned out every team was using a different fairness metric.

When it comes to external communication, the user feedback mechanism can be a very powerful tool in developing AI strategies. One of the big mistakes companies make is not having that live, incremental feedback from their users. When you have those feedback loops in place, you're able to catch problems faster and fine-tune your models.

"When you have those feedback loops in place, you're able to catch problems faster and fine-tune your models."

Communication is also important for setting expectations. We're in an economy based on trust, and there is a massive amount of mistrust, fear and confusion when it comes to companies’ approaches to AI. A comms team has both the responsibility and the opportunity to build trust with users through transparency. Nowadays, trust is a determining factor for users, especially for younger generations.
