
Navigating DEI and AI legislation: an in-house counsel's perspective

Prefer to watch or listen? This material was covered in our “Textio Talks: AI and DEI legislation” webinar. Access the recording here.

Textio customers are thoughtful. They research AI legislation, keep up with DEI news, and do their best to translate it all into applicable insights for their teams. As Textio’s Director of Legal, I also do that for our team—and our partners are often curious about how we’re seeing and sharing things internally.

To that end, I wanted to share my view as an in-house counsel on the fast-changing landscape surrounding DEI and AI legislation. My goal is to provide HR and talent leaders with information that will help you work closely with your in-house counsel to understand how new legislation impacts your organization.

First, a disclaimer: this is not legal advice. Rather, it is just an in-house counsel's perspective on DEI and AI legislation. Again: not legal advice.


First, we'll talk about AI: what it is, how talent leaders are using artificial intelligence in recruiting and employee advancement, the benefits and risks of using AI in recruitment and development, what lawmakers are doing to regulate AI, and what that means for talent leaders.

Then we'll discuss some of the new DEI legislation that has been passed in several states. We’ll talk about how it may impact your organization and how to work with your in-house counsel to navigate these new laws.

Let’s jump in.

What is AI?

What exactly is AI other than something we're hearing about in the news a lot lately? I think a definition is helpful: AI is the science of making computers able to perform functions that normally require human intelligence.

Essentially, a computer uses formulas designed to produce specific outputs, or algorithms that continuously learn, so that those outputs are refined as data volumes increase. And this happens without human intervention. It's pretty incredible technology, and we've been using it in myriad ways in our daily lives. If you say "Hey Siri" or "Hey Google," that's AI. Navigation and maps on your phone? That's AI. Autocorrect is AI. We text with AI chatbots on websites.

You get the point. It's everywhere. At home, at school, and most definitely in the workplace. If you aren't using it, trust me, your teams are, or tools that they use are.

In fact, 79% of companies are using some form of AI or automation to assist with recruitment.

How are employers using artificial intelligence in human resources and recruiting?

What are some ways AI systems are already in use in the employment world? We can use AI to sift through large numbers of resumes to determine who should move on to the next step in the process. AI can be used in screener interviews to monitor tone and content. There are virtual assistants or chatbots that ask job candidates about their qualifications and reject those who do not meet predefined requirements.

AI is also used in testing software that provides job-fit scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived cultural fit, based on their performance on a game or on a more traditional test.

AI is also used in performance management. For example, companies with drivers use AI to track the movements of the drivers to ensure safety and also monitor their performance. There’s also employee monitoring software that rates employees on the basis of their keystrokes or other factors.


What are the benefits and risks of using AI in recruiting?

AI can bring significant benefits. Its processing power and ability to analyze large amounts of data in a short amount of time are an enormous time saver, and AI can surface insights for employers that humans may miss. When AI is trained properly, it can even help interrupt bias in real time.

But as we know, AI also presents risks. The output from an AI program can only be as good as its input. AI learns by training on large data sets; if the information in those data sets is incorrect or contains biased or discriminatory content, the output will be too. This is how you end up with gender bias, racial bias, and many other biases in AI. And it opens up opportunities for harm.

For example, if the training set for an AI system that analyzes resumes happens to come from 90% white male candidates, then the AI system may look for white male candidates and start excluding otherwise qualified individuals. This is an unintended consequence, because again, the output is only as good as the input.
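As a toy illustration of this dynamic (entirely hypothetical data and function names, not any real hiring system), a naive scorer that learns word weights from historically skewed hiring decisions will reproduce that skew, penalizing terms that merely correlate with the underrepresented group:

```python
from collections import Counter

# Hypothetical historical data: (resume keywords, was hired).
# The hiring history is skewed: resumes mentioning "rugby" were favored,
# and "rugby" vs. "softball" stands in for a demographic proxy.
history = [
    (["python", "rugby"], True),
    (["python", "rugby"], True),
    (["python", "softball"], False),
    (["java", "rugby"], True),
    (["java", "softball"], False),
]

def learn_weights(history):
    """Naive weights: how much more often each word appears in
    hired resumes than in rejected ones."""
    hired, rejected = Counter(), Counter()
    for words, was_hired in history:
        (hired if was_hired else rejected).update(words)
    vocab = set(hired) | set(rejected)
    return {w: hired[w] - rejected[w] for w in vocab}

weights = learn_weights(history)
score = lambda words: sum(weights.get(w, 0) for w in words)

# Two equally qualified candidates diverge only on an irrelevant hobby word:
print(score(["python", "rugby"]))     # prints 4
print(score(["python", "softball"]))  # prints -1
```

The model never saw a rule like "prefer rugby players"; it inferred one from biased outcomes in its training data, which is exactly the unintended consequence described above.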

The bottom line is that AI is here to support us. We just have to be mindful of how we use it and adopt responsible AI practices. These risks are why leaders around the world are looking at AI and the law and starting to draft and pass rules.

What are the different types of AI regulation?

Regulation is happening at the international, national, and state levels.

International AI legislation

You may recall that the EU was a leader in data protection regulation, passing the now almost infamous General Data Protection Regulation, or GDPR. The EU is now again looking to lead the way, drafting what’s called the “Proposal for a Regulation laying down harmonized rules for artificial intelligence”—better known as the “EU AI Act.”

The act creates a risk category system, and AI systems will be regulated based on the level of risk they pose to the health, safety, and fundamental rights of individuals. The categories are Unacceptable, High, Limited, and Minimal/None. While we don’t have specifics yet, the act currently does categorize “employment, workers management, and access to self-employment” as a high-risk category, which means it will be subject to what the Act calls “careful regulation” (also something not yet defined).

So this is a space we’ll all be watching closely because if GDPR is any indication, the EU AI Act may become a model for other countries’ AI policies. The Act is expected to be adopted in early 2024 with a transition period of at least 18 months before becoming fully enforceable.

U.S. national AI legislation

We also have three national rules that address AI: President Biden's executive order and regulations from two agencies, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ).

Let's talk first about the executive order: what does Biden's new AI executive order mean? President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on October 30th of 2023. The order does several things:

  • Within the next year, the National Institute of Standards and Technology (NIST) is required to develop standards and procedures for developers of AI to conduct AI “red teaming” or security tests.
  • It directs numerous federal agencies to conduct studies and develop guidelines on the use of AI in their respective agencies.
  • It calls on federal agencies to ensure AI technologies don't promote bias or discrimination.
  • It will require those private companies developing AI that potentially poses a serious risk to our national security, economic security, or public health or safety to report certain information and activities to the federal government.

The executive order has been called sweeping in its breadth, and it will certainly impact many public institutions and private companies. So it's another space we're watching closely.

In addition, in May of 2022, the EEOC published technical guidance for employers on how to measure adverse impact when AI employment tools are used. And the DOJ published guidance that warns employers about disability discrimination when using AI to make employment decisions.

U.S. state AI legislation

State legislators have also been busy. In the 2023 legislative session, 25 states, the District of Columbia, and Puerto Rico adopted resolutions or enacted legislation on AI. The majority of these laws suggest that most states are in the infancy of their regulatory AI journey, but a few states have passed more mature AI bills.

Let’s talk specifically about laws passed in Illinois, New York, and Texas.



Illinois

Illinois’s Artificial Intelligence Video Interview Act places obligations on employers hiring for positions based in Illinois. These obligations pertain to notice and consent, confidentiality, destruction of videos, and reporting of demographic data.

With respect to notice and consent, an employer must provide notice and receive informed consent from an applicant before the employer can use AI to analyze and make decisions based on video interviews of applicants. The videos can't be shared except with those whose expertise or technology is required to evaluate the applicant's fitness for a position. And the video must be deleted within 30 days of a request from the applicant. Finally, an employer that relies solely upon an AI analysis of a video interview to determine whether an applicant will be selected for an in-person interview must collect and report both the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview, as well as the race and ethnicity of applicants who are hired.

This data must be reported to the Illinois Department of Commerce and Economic Opportunity each year. The Department is then required by this law to analyze the data and report to the Governor and General Assembly each year whether there is a racial bias in the use of AI.

New York

New York City's Local Law 144 applies to residents of New York City. It requires employers that use an automated decision tool (AEDT) to assist with employment decisions to confirm that such tools have undergone a bias audit. The law defines an AEDT as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues a simplified output, including a score, classification, or recommendation that is used to substantially assist or replace discretionary employment decisions that impact natural persons.”

The bias audit must calculate the rate at which individuals in each category are either selected to move forward in the hiring process or assigned a classification by the AEDT, broken down by the categories employers are required to report under the aforementioned EEOC regulations. Employers must provide at least ten days' notice to employees and job candidates about the use of the AEDT, along with instructions for requesting an alternative selection process. An employer must also post information on the employment section of its website, including its AEDT data retention policy, the type of data collected by the AEDT, and the source of that data.
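To make the audit calculation concrete, here is a minimal sketch, with hypothetical applicant numbers and function names (not from the law's text), of computing per-category selection rates and then comparing each category against the most-selected one, the style of impact-ratio comparison auditors commonly use:

```python
# Hypothetical applicant data: category -> (selected, total applicants)
outcomes = {
    "Group A": (48, 100),
    "Group B": (30, 100),
    "Group C": (12, 50),
}

def selection_rates(outcomes):
    """Rate at which each category is selected to move forward."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(rates):
    """Each category's selection rate divided by the highest category's rate."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
for cat in outcomes:
    print(f"{cat}: rate={rates[cat]:.2f}, impact ratio={ratios[cat]:.2f}")
```

In this made-up example, Group C's impact ratio of 0.50 would flag a large disparity relative to Group A; the actual thresholds and required categories are set by the law and its implementing rules, not by this sketch.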

Violations of this law carry penalties of up to $500 for a first violation and between $500 and $1,500 for each subsequent violation. So penalties could be steep.


Texas

Texas House Bill 2060 is much more in line with legislation that other states are currently considering or have passed, in that it's in more of an exploratory phase. It establishes the Artificial Intelligence Advisory Council to study and monitor AI systems used by state agencies and to submit a report on its findings at the end of 2024. State agencies must also provide an inventory report of all automated decision systems they're using by July of 2024.

U.S. DEI legislation

While the recent AI regulations and laws aim to address concerns around bias and harm, there has otherwise been a lack of pro-DEI legislation. Instead, unfortunately, legislative efforts have been moving in the opposite direction.

You’ll recall the recent Supreme Court ruling that struck down race-conscious admissions in higher education. This struck a blow to university affirmative action programs and sent many colleges and universities scrambling to seek alternate programs to promote diversity in their admissions processes.

In addition, as of July 2023, 40 bills have been introduced in 22 states that would place restrictions on DEI initiatives at public colleges. This is according to data compiled by the Chronicle of Higher Education, which is tracking this very closely.


The Chronicle's tracking map distinguishes states where these laws have passed from states where bills were tabled or failed. But a tabled or failed bill can be reintroduced and passed in a future session, so it's still helpful to know that these are states very much looking to make changes to current DEI policies with respect to higher education.

Many of these bills aim to prohibit using federal or state funding to support DEI offices or staff at public colleges. Others prohibit mandating diversity training, using diversity statements in hiring and promotion, or using identity-based preferences in hiring and admission. Some go even further: laws passed in Texas and Florida state that no public funds may be spent on DEI programs at higher education institutions.

The full impact of these laws is yet to be seen as public institutions wrestle with understanding what they can and cannot do with this newly passed legislation. Many are also wondering what impact these laws will have on the private sector. Candidate pools for roles requiring college degrees will likely change, and employers will need to make concerted efforts to ensure their companies retain and achieve their DEI missions and goals.

What other laws will come next? Who will they affect, directly or indirectly? At this point it's hard to say.

Three things talent leaders can do to comply with AI and DEI laws

What can talent leaders do to ensure compliance with these laws? First, work with your organization to implement an AI policy. Determine what you require from your AI vendors, how your company will use AI, and how it won't. Ask how your company is monitoring its use and ensuring improvements are implemented along the way.

Next, engage in bias audits. Check your systems for disparate impact on underrepresented groups, and use software specifically designed to remove bias whenever possible. Textio is the only tool specifically designed for HR that detects bias from both people and machines, and Textio CEO Kieran Snyder was invited to Congress to speak with legislators about the concerns with AI in the workplace.

Finally, work with your in-house counsel. Laws are changing quickly, and your in-house legal team is there to help make sure your company stays compliant with them. They can help explain which new laws apply, which don't, and what they mean for your organization.

Your in-house counsel can also help analyze the specifics of these new laws when necessary, translating the legalese into language that senior leaders and other decision-makers in the company can easily understand. In addition, as laws change, your internal policies may need to be updated as well, and in-house counsel can help draft and review those policy updates. Finally, in-house counsel can be a great resource for educating and training your team and others in your organization about these new laws.

I hope that was helpful. Keep in contact with your legal team as questions and new laws come up, as they inevitably will. We’re tracking—and we’re here to help!

You can also learn more about our free trial of Textio Lift here.

