These behaviors can help counter AI bias, a Copenhagen Business School professor says

Poornima Luthra, an associate professor at Copenhagen Business School, recommends leaders practice these two inclusive behaviors when they use generative AI tools

As more organizations incorporate generative AI tools into their day-to-day work, bias in the underlying models can be a silent saboteur, and it can manifest in several ways. Consider an AI tool trained on data sets that carry legacies of bias based on race, gender, and other characteristics: it could skew toward a particular demographic in a talent-acquisition process, or generate a market analysis that overlooks key populations because they are underrepresented in the underlying data.
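To make the talent-acquisition example concrete, one simple way to spot this kind of skew is to compare selection rates across demographic groups in a tool's output. The short Python sketch below is purely illustrative and not drawn from Luthra or Deloitte: the screening decisions are hypothetical, and the four-fifths rule it applies is a widely used heuristic for flagging potential adverse impact, not a method described in this article.

```python
from collections import defaultdict

# Hypothetical screening decisions produced by a gen AI resume screener.
# Each tuple: (demographic_group, advanced_to_interview)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate per group: the share of candidates advanced to interview.
totals = defaultdict(int)
selected = defaultdict(int)
for group, advanced in decisions:
    totals[group] += 1
    selected[group] += advanced  # bool counts as 0/1

rates = {group: selected[group] / totals[group] for group in totals}
highest = max(rates.values())

# Four-fifths rule: a group whose selection rate falls below 80% of the
# highest group's rate is a red flag worth investigating for bias.
for group, rate in sorted(rates.items()):
    flag = " <- review for potential bias" if rate < 0.8 * highest else ""
    print(f"{group}: selection rate {rate:.0%}{flag}")
```

A check like this does not prove or disprove bias on its own, but it can surface skewed outputs early enough to prompt a closer look at the tool and its training data.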

These types of bias-related risks are top of mind at some organizations. In the second-quarter 2024 edition of Deloitte’s State of Generative AI in the Enterprise series, which surveyed nearly 2,000 business and technology leaders, 26% of respondents cited “potential bias causing negative consequences” as one of the top three generative AI-related risks their organization is most concerned about.

What best practices can gen AI users adopt to counter potential bias? Poornima Luthra, an associate professor at Copenhagen Business School and author of The Art of Active Allyship, spoke with Deloitte Insights at the Thinkers50 gala in London, an event hosted by the organization that celebrates achievements in business and leadership research. She shared two key behaviors she recommends all leaders practice when using generative AI tools. Watch the video to learn more.

More about our spotlighted speaker

Poornima Luthra was honored in the Thinkers50 Radar Class of 2023, a list produced in collaboration with Deloitte US that spotlights business and management thinkers whose ideas are most likely to shape the future. For more information about the Thinkers50 Radar list, visit https://thinkers50.com/radar-2024/.
