
I wrote thousands of documents with ChatGPT, and here’s what I learned about bias

I usually love to write, especially pieces like this where I get to tell stories and share data. However, late last year, I had a document to write that I was not excited about. We had just launched Textio's new performance management product and were making several changes to our revenue operations for 2023, and I needed to write down the plan. I had already done all the interesting thinking behind the plan; the idea of turning my notes into a dry long-form document felt pretty tedious.

Still, I knew it needed to be done. Writing down a plan in complete sentences and turning the thinking into a shared artifact has huge operational and cultural significance inside workplaces. Getting to a fully documented plan is the way we make sure that we’ve thought through important decisions and projects down to the details. It’s also the way we make sure that everyone is aligned.

I'd been playing with ChatGPT for a couple of weeks at that point. I've spent my entire career making and using language models; we first launched our own generative AI features inside Textio in 2018. Even so, I'd been impressed by my initial experiences with ChatGPT. More than anything else, using the app every day for a couple of weeks, I was struck by how quickly the OpenAI team appeared to be iterating and improving. I decided to dump my document outline into ChatGPT to see what would happen.


To be honest, it didn't do that great. ChatGPT repackaged my thoughtful outline into a 200-word summary, when I estimated the task at hand needed about 2,000 words. It had done the opposite of what I'd been hoping for; rather than helping me explain and elaborate on my ideas, it obscured all the interesting details.

So I tried again. This time, I broke my outline into pieces and fed my notes into ChatGPT section by section. What happened was a revelation. One section at a time, ChatGPT generated writing that was about 75% as good as what I needed. Editing it into a full doc that was good enough to publish took me half an hour; writing the whole thing on my own would have taken three or four hours. The final doc wasn't as good as what I would have written myself, but considering the time savings and the highly functional nature of the doc in question, it was close enough.
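If you'd rather script this section-by-section approach than paste into the chat window, the sketch below shows the shape of the workflow. It assumes the OpenAI Python client; the model name, prompt wording, and outline sections are illustrative placeholders, not what I actually used.

```python
# A minimal sketch of the section-by-section workflow, assuming the
# OpenAI Python client (openai>=1.0). The model name, prompts, and
# outline text below are placeholders, not what I actually used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

outline_sections = [
    "Goals: why we're changing revenue operations in 2023 ...",
    "Plan: the specific changes, owners, and timelines ...",
    "Risks: what could go wrong and how we'll mitigate it ...",
]

drafts = []
for section in outline_sections:
    # One request per section nudges the model to elaborate on that
    # section's details instead of summarizing the whole outline.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You help draft internal planning documents."},
            {"role": "user",
             "content": f"Expand these notes into full prose:\n\n{section}"},
        ],
    )
    drafts.append(response.choices[0].message.content)

# Stitch the per-section drafts together; a human still edits the result.
full_draft = "\n\n".join(drafts)
```

The specific API calls matter less than the structure: one focused request per section is what turned ChatGPT from a summarizer into an elaborator.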

This got me thinking along two lines. First, just like any conversation, a conversation with ChatGPT is two-sided. What I say changes how the app responds. By changing how I structured my prompt, I was able to change ChatGPT’s response into something more or less useful to me. How would I have to structure my prompts to make its writing effective, appropriate, and engaging for its intended audience?

Second, it was clear that I could get ChatGPT to write any kind of document I asked for. If I could do that consistently, what kind of bias would show up in ChatGPT's writing?

To explore that deeply, I spent the next few weeks using ChatGPT to write thousands of job posts, email messages, pieces of workplace performance feedback, and other kinds of writing where Textio is great at sniffing out bias. This series shares what I learned. The headline? The most compelling content that ChatGPT writes is also the most biased.

At the end of this series, I’ll share a final wrap-up telling you about the actual writing in these articles. I’ll tell you where I used ChatGPT to help me and where I did not. By the end of the series, though, you might already be able to guess on your own.

Next up: ChatGPT writes job posts; ChatGPT writes performance feedback; ChatGPT writes recruiting mail; ChatGPT rewrites feedback; ChatGPT writes valentines.
