Face Value? Legal and Practical Risks in Using Facial-Analysis Tools for Hiring


In an era when artificial intelligence is rapidly reshaping recruiting, a provocative new study, recently discussed by The Economist under the headline “Should facial analysis help determine whom companies hire?”, suggests that algorithms can infer a candidate’s future earnings and job mobility from a photograph. As lawyers counseling employers, we must weigh not only the potential utility of such tools but also the substantial legal, ethical, and compliance risks they present in the hiring context.


1. What the research claims

The paper underlying the discussion purports to analyze the faces of job applicants (or perhaps MBA graduates) and to draw statistical associations between facial features and outcomes such as post-MBA earnings and the likelihood of changing jobs. Proponents suggest that if facial-analysis software can reliably flag traits correlated with performance, retention, or earnings, it may offer a seemingly objective shortcut for screening.

2. Why many employers may find this tempting

From a business perspective, tools that promise predictive power over candidate success are highly attractive: faster screening, potential cost savings, and differentiation in talent acquisition. For in-house counsel and HR leaders, the allure lies in quantifying an elusive risk (i.e., “who will stay, who will succeed”).

3. But the legal and ethical landmines are real

a) Discrimination exposures

Employers must remember that under federal (and state) employment laws, making hiring decisions on the basis of protected characteristics (race, sex, age, disability, etc.) is strictly prohibited. If a facial-analysis tool infers traits that are correlated with such protected characteristics (intentionally or inadvertently), the employer may face disparate-impact liability. Even a model that is facially “race-blind” can indirectly replicate bias if the training data reflects historic inequalities or proxies for protected traits.

b) Validity and defensibility issues

In the hiring context, the key question is: does the tool predict job-relevant performance or success, and can the employer demonstrate that its use is job-related and consistent with business necessity? If not, the use may be legally indefensible. The study the article cites shows associations (e.g., between facial features and earnings), but associations are not proof of causation, nor of relevance to a specific job context.

c) Privacy and consent considerations

Using someone’s facial image for personality or performance prediction may trigger privacy, biometric, and data-protection issues — depending on jurisdiction. Employers must address consent, transparency, data security, retention, and usage-purpose issues.

d) Reputational and ethical risks

Even if “legal,” using facial-analysis tools can create employee-relations, brand-reputation and morale issues. The perception of “judging a candidate by their face” risks undermining trust and may deter talent.

4. Practical guidance for employers

As counsel to companies considering these tools, I recommend the following steps:

  • Due diligence on the vendor/algorithm: Ask for documentation on training data sets, model-validation studies, independence of testing, bias audits, and how protected class proxies are handled.
  • Link to bona fide job-relevant criteria: Get clear evidence that the attributes the tool predicts map to actual performance or retention in your specific roles. Maintain documentation.
  • Audit for disparate impact: Before deployment, simulate outcomes across demographics; after deployment, monitor for unexplained adverse impact.
  • Transparency and candidate communication: Ensure candidates know what’s being used, why, what it measures, and how their data will be retained or deleted.
  • Fallback to human decision-making: These tools should augment—not replace—qualified human evaluators. Employers should maintain appropriate oversight, review and override rights.
  • Legal and jurisdictional compliance: Remember that different states (and countries) may have biometric laws, privacy laws, or AI-specific regulation. For example, some states restrict use of biometric identifiers or high-stakes AI decision-making without human review.
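On the disparate-impact audit step above, the traditional starting point is the EEOC’s “four-fifths rule” of thumb: a selection rate for any group that falls below 80% of the highest group’s rate may indicate adverse impact. The following is an illustrative sketch only, not legal advice; the group labels and counts are hypothetical, and a real audit should be statistically rigorous and attorney-supervised.

```python
# Illustrative sketch of a four-fifths-rule screen -- not legal advice.
# Group names and counts below are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest observed rate (the four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {g: rate / benchmark < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, candidates screened)
outcomes = {
    "group_a": (48, 100),   # 48% advance rate
    "group_b": (30, 100),   # 30% advance rate
}
print(adverse_impact_flags(outcomes))
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
```

Note that the four-fifths rule is only a screening heuristic; courts and regulators also consider statistical significance and sample size, which is why post-deployment monitoring belongs in the governance program rather than a one-time check.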

5. Conclusion

While the emerging research may tempt employers with the promise of predictive insights from facial analysis, using such tools raises significant legal, ethical, and compliance concerns. At DBL Law, we advise that any employer considering such technology proceed with extreme caution. Treat the tool as a high-risk adjunct rather than a ready-made solution, build rigorous audit and governance frameworks around it, and always keep the “human in the loop” principle front and center. The face of technology may be alluring, but the face of liability can be far less pleasant.


DBL Law Insight

If you’d like assistance developing a vendor-risk assessment framework, vendor contract terms tailored to AI-hiring tools, or an audit program to evaluate disparate-impact risks, our team at DBL Law is ready to help.

For more information, contact:
Nick Birkenhauer, Attorney
