DALL·E 2 painting of Washington, DC

Textio talks AI and bias in Washington

Last week, I traveled to Capitol Hill to provide a private briefing on AI, people systems, and bias for the Congressional Caucus for AI. Congress is highly focused right now on building effective AI policy. If you're finding that the landscape is moving so fast it's difficult to keep up, you're not alone. Our lawmakers feel it too.

Textio is the only communication software designed purposefully for HR, and it was a real honor to be part of the emerging legislative conversation on AI policy. It was even more encouraging to see our nation's lawmakers taking the topic of bias in people systems seriously in this process. We've used generative techniques in Textio's product for several years, helping people write job descriptions, recruiting mail, performance reviews, and other employee communication effectively and without bias. It was an incredible privilege to have our leadership recognized and to be included among the handful of innovators and business leaders invited to share a perspective with Congress.

I recorded a version of my 10-minute Congressional demo to share with the public as well. View it here.

In my presentation to the Congressional Caucus, I reviewed some common biases that show up in generative AI when it isn’t designed purposefully. In addition, I touched on the pressure that organizations are under to get this right. As the SEC has required deeper disclosures about employee hiring, retention, and DEI over the last couple of years, organizations are under more public scrutiny than ever before. This makes questions about AI and bias especially important.

In the conversation with the caucus, we discussed how AI can actually help drive equity in the workplace by automatically detecting biases that people can’t spot on their own. However, AI can also produce and perpetuate these biases. In fact, as we discussed with Congress, AI systems are biased by default unless the data sets underneath them have been designed intentionally to mitigate bias from the beginning.

We also considered more purposeful alternatives. In particular, we looked at what AI can do to spot biases in writing, whether the content is written by people or by other AI.
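For readers curious what "spotting bias in writing" can look like at the most basic level, here's a deliberately simplified sketch. This is a toy keyword check, not how Textio's product actually works, and the word lists are hypothetical examples; real systems rely on intentionally designed datasets and models rather than hand-picked terms. But it illustrates the basic idea of flagging gender-coded language in a job post draft, whether that draft was written by a person or by another AI.

```python
# Toy illustration of flagging potentially gender-coded language in text.
# NOT Textio's method; word lists below are hypothetical examples only.

import re

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "aggressive", "fearless"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "empathetic"}

def flag_coded_language(text: str) -> dict:
    """Return any gender-coded terms found in the text, grouped by category."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

if __name__ == "__main__":
    draft = "We need a rockstar engineer who is aggressive about deadlines."
    print(flag_coded_language(draft))
    # {'masculine_coded': ['aggressive', 'rockstar'], 'feminine_coded': []}
```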

I’m so appreciative that Congress is approaching the AI conversation with the curiosity this topic deserves. I’m even more appreciative of how seriously our legislators are weighing key considerations like bias, IP rights, data privacy, and the trustworthiness of content. Most encouragingly, at the moment, the discussion doesn’t feel partisan. It feels like a concerned group of lawmakers working to make the best decisions they can for our public policy.

View a version of what I presented to Congress on AI, HR, bias, and equity here.
