
Do robots have a race problem? Not all scholars agree

Source: Scientific American
Science

April 8, 2026

7 min read



As humanoid robots enter the real world, new studies suggest that people project human racial biases onto them—but the research is divided on whether those biases persist in real-world interactions outside the lab

By Deni Ellis Béchard edited by Jeanna Bryner

An Optimus humanoid exhibited by Tesla at the World Artificial Intelligence Conference in Shanghai, China Monday, July 28, 2025.

LONG WEI / Feature China/Future Publishing via Getty Images

When researchers asked more than 1,000 Americans to assign colors to robots according to the robot’s job, they found that biases familiar from the human workplace resurfaced—and that the people making the choices rarely recognized them as biases. The patterns were strong enough to predict which robot would be picked for which role, yet participants explained themselves in the neutral language of practicality, not prejudice. As humanoid machines move from research labs onto factory floors and into hospitals, that gap between what people choose and what people think they’re choosing is precisely what worries the researchers: a workforce of robots could end up sorted by the same hierarchies that sort the human one, with no one willing to call it that.

The study, published in conference proceedings in March 2026 by researchers Jiangen He, Wanqi Zhang and Jessica K. Barfield, joins a growing body of robotics research that often disagrees about whether people perceive robots as having a race at all. It also arrives at a time when questions about humanoid robot design are about to stop being academic. Tesla CEO Elon Musk says the company will convert part of its factory in Fremont, Calif., to produce Optimus robots. Chinese firms such as Unitree Robotics are shipping backflipping robots to consumers, and Figure AI’s humanoids are working on BMW assembly lines. “Assigning appearance to a social robot is never a purely aesthetic choice,” He, Zhang and Barfield write in a paper posted to the preprint server arXiv.org that expands on the study. “It is a profound socio-technical intervention requiring intentional ethical design.”

For the study, the researchers recruited participants through the survey platform Prolific and showed each of them four workplace scenes without any human figures: a construction site, a hospital, a home tutoring setup and a sports field. For every scene, participants picked one robot from a lineup of six that differed only in color—there were four skin tones ranging from light to dark, plus a silver and a teal option meant as nonracial baselines. Roughly half chose silver or teal across the scenarios. But when participants selected a skin-toned robot, the results tracked with stereotypes that researchers have documented linking Latinos with manual labor, Asians with academic competence, Black people with athletic ability, and white people with professional roles. In a second experiment with a different group of participants, the researchers added human professionals—a Latino construction worker, a white doctor, an Asian tutor and a Black athlete—to the same scenes. The bias sharpened: those participants were nearly six times more likely than the first group to pick a robot whose skin tone matched the worker they had just seen.


The team also asked participants to explain their robot color choices. “We wanted to dig in deeper to the reasons why certain robots were chosen for certain positions,” says Barfield, a researcher at the University of Kentucky. Many participants justified choosing white robots for health care settings because they looked cleaner and dark robots for construction because they would be less likely to show dirt, the researchers report in their arXiv.org preprint. A different pattern emerged when the researchers zoomed in on the moments in which participants happened to pick a robot whose skin tone matched their own for a job. White and Asian participants tended to reach for psychological and affective reasoning, saying that the robots made them feel calm or that they personally liked the color. By contrast, Black participants who selected dark-skinned robots gave functional justifications. “They would say, ‘Oh, this robot looks stronger or looks more useful’—this kind of more functional reason,” says He, a researcher at the University of Tennessee, Knoxville.

In a phenomenon called racial mirroring, people tend to feel affective resonance with agents that look like them, the researchers explain in the preprint. The finding suggests that mirrori