IBM Leaders Share How Algorithms and Bias Affect Us

 

To prevent biased algorithms you need to make sure you have unbiased training data on hand. You also need algorithms to be developed by a diverse set of people.
–Lisa Seacat DeLuca, Director of Offering Management and Distinguished Engineer for IBM Watson Internet of Things

 

How do we select vendors, and what do we focus on? We must ask: What data did the AI learn from? How did you arrive at the algorithm you're using? Are you validating? What is the purpose of the data? How is the AI learning?
–Amber Grewal, Vice President of Global Talent Acquisition at IBM

 

Amazon recently announced that it had shut down a talent-finding algorithm built by an internal team. Why? Because it was perpetuating bias against women at the tech giant, something that is unacceptable in today's work environment.

With so many bots, algorithms, and other tools being used to automate our work and personal lives, it's important to think about how this affects each of us. Is there bias in the algorithms that drive our decisions? If so, how do we mitigate it?

In today's episode, Ben talks with two IBM leaders who bring distinct perspectives on AI, bias, and more. Lisa Seacat DeLuca and Amber Grewal join the show to discuss how they see AI benefiting the workplace, as well as how to watch for bias and prevent it from creeping into the finished product.

 

Originally published on the Upstart HR blog.

 

 
