
Our tools redefine what it means to be us: perceived robotic agency decreases the importance of agency in humanity

Abstract

Past work has primarily focused on how the perception of robotic agency influences human–robot interaction and the evaluation of robotic progress, while overlooking its impact on reconsidering what it means to be human. Drawing on social identity theory, we proposed that perceived robotic agency diminishes the importance of agency in humanity. We conducted three experiments (N = 920) to test this assumption. Experiments 1 and 2 manipulated perceived robotic agency. Experiments 2 and 3 separately measured and manipulated distinctiveness threat to investigate the underlying mechanism. Results revealed that high (vs. low) perceived robotic agency reduced ratings of the essentiality of agency in defining humanity (Experiments 1 and 2); distinctiveness threat accounted for this effect (Experiments 2 and 3). The findings contribute to a novel understanding of how ascriptions of humanity are evolving in the AI era.


“We shape our tools, and afterwards our tools shape us.”

—John M. Culkin (1967, p. 70)

Recent years have witnessed the rapid and extraordinary development of artificial intelligence (AI), particularly in the realm of robotics. For instance, AlphaGo has become the top Go player, defeating numerous human champions [50], and medical AI has achieved expert-level accuracy in disease diagnosis [15, 40]. The human-like or even superior performance of AI robots prompts people to attribute mental capacities, or minds, to these machines [3, 12, 23]. In particular, perceived agency, a core dimension of mind referring to the capacity for thought and action [22], has increased significantly. The rise of machine agency has introduced a fundamental tension between robotic and human agency, prompting a search for balance between the two [59]. Accordingly, researchers have largely focused on the impact of perceived robotic agency on human–robot interactions (e.g., [2, 7, 10, 11, 57]).

However, in the face of robotic progress, humans’ reactions are directed not only at those nonhuman entities but can also reflect back on humans themselves. As Culkin [8] claimed, humans’ endeavors to create more innovative tools are always accompanied by a refreshed understanding of what it means to be human. Likewise, the dual nature of computational objects such as robots, which are both things and human-like entities, is believed to evoke a reconsideration of humanness [17, 62]. This raises an important yet underexplored question: Does the perception of robotic agency influence how people ascribe agency to humanity? Inspired by social identity theory, we posited that individuals may downplay the importance of agency in humanity when they perceive high levels of robotic agency, in order to preserve their own sense of distinctiveness [16, 60]. Therefore, the current research examined whether perceived robotic agency decreases the importance of agency in humanity through increased distinctiveness threat.

Mind perception and perceived robotic agency

Minds refer to the mental capabilities of an entity [22]. Mind perception operates primarily along two dimensions: agency and experience. The agency dimension entails the capacity to think and act, such as self-control and communication, while the experience dimension involves the capacity to feel, such as hunger and fear [22, 68]. Among the numerous entities in the natural world, humans perceive themselves to possess the highest degree of both agency and experience, which are considered important features of humanity [23, 26, 43]. In particular, agency represents human uniqueness that distinguishes humans from nonhuman animals, whereas experience is tied to human nature, setting humans apart from robots and inanimate objects [26].

Mind perception of robots powered by AI is evolving, with a prominent focus on the agency dimension. Robots used to be perceived as entities with moderate agency and low experience [22, 68]. However, with the advent of the AI era, robots have made rapid technological progress, leading to increased perceptions of their agency in areas such as communication and thinking. Although these advancements have also enhanced perceptions of robots’ experience, people still perceive robots’ agency to be much higher than their experience [33]. For instance, people place greater trust in AI robots for agency-related tasks than for experience-related ones [4, 44]. Following the widespread belief that robots are capable of agency but lack experience [14], this study focused on perceived robotic agency.

Existing research on perceived robotic agency has typically focused on its consequences for human responses to or interactions with robots, with the goal of facilitating robotic development and application [24, 25, 30]. For instance, some studies have found that people’s trust, liking, and support for robots are influenced by the belief that robots can think and act autonomously [9, 69, 74]. Despite this prevailing robot-focused perspective, it remains unclear how people conceptualize the role of agency in humanity in response to increased perceived agency in robots. Therefore, we turned to a human-focused perspective, shifting the focus to the transformation of human identity in the era of AI, by examining the effect of perceived robotic agency on ascribing the importance of agency in humanity.

Perceived robotic agency, distinctiveness threat, and the importance of agency in humanity

Humanity is conceptualized as comprising the attributes that characterize the essence of being a human [27]. Building on this, we define the importance of agency in humanity as the extent to which agency is considered integral and fundamental to human identity. It is crucial to note that the perceived importance of an attribute within a construct is not objective and static but can be adjusted strategically, contingent on contextual and motivational factors [45, 46, 49]. For example, the importance of competence/morality in self-esteem is swayed by general system justification [39]. Before the AI era, agency was universally regarded as a vital human characteristic [23, 26]. However, the exceptional agency exhibited by robots today may well challenge the ongoing relevance of agency as a human-centric attribute.

We argue that perceived robotic agency facilitates an increase in distinctiveness threat. Distinctiveness threat refers to the degree to which the ingroup’s sense of distinctiveness is compromised, or the degree of overlap that occurs at the group boundaries [34, 53]. While both robots and humans are seen as having agency, humans are typically viewed as possessing a higher level of agency than robots, thereby creating a boundary between the two entities [22, 33]. Nonetheless, the progression of artificial intelligence has accelerated, blurring the lines between humans and robots in terms of agency [54, 55]. This convergence poses a distinctiveness threat by eroding the boundaries that once distinctly separated humans from robots. Empirically, prior studies have found that robots with high (versus low) autonomy are perceived as a threat to human uniqueness and identity [47, 73, 74].

Distinctiveness threat might diminish the importance of agency in humanity. According to social identity theory, distinctiveness threat motivates individuals to adopt strategies to preserve or restore human distinctiveness [29, 35, 65]. These strategies can be categorized as realistic or cognitive. Realistic strategies, such as social mobility and social competition, aim to restore distinctiveness through tangible actions, like competing with outgroups or changing group membership [16]. However, given the irreversible advancements in AI, robots will inevitably possess features once considered uniquely human [64], making realistic strategies less viable.

Instead, the social creativity strategy—a cognitive approach that preserves distinctiveness without altering the ingroup’s status—emerges as the most feasible and effective option in this context [1, 60]. One crucial way the social creativity strategy addresses distinctiveness threat is by altering the ingroup’s perception of the threatened dimension [37, 52, 60]. In human–robot interaction, as robots increasingly exhibit agency, humans may no longer perceive agency as exclusive to human mental capacities, given that it can now be realized through automation [42]. Consequently, the importance of agency within humanity may diminish, serving to preserve human–machine distinctions and maintain human uniqueness.

Based on the above arguments, we hypothesized that perceived robotic agency decreases the importance of agency in humanity (Hypothesis 1), and distinctiveness threat mediates this effect (Hypothesis 2).

Overview of the current research

We tested the hypotheses with a preliminary investigation and three experiments. Specifically, the preliminary investigation confirmed that public perception of robotic agency surpasses that of experience, providing the rationale for focusing subsequent experiments on perceived robotic agency (rather than experience). Experiment 1 examined the effect of perceived robotic agency on the extent to which participants viewed agency as essential to being a human. Experiments 2 and 3 further examined the mediating role of distinctiveness threat with measurement-of-mediation and experimental-causal-chain designs [56], respectively. Moreover, in these two experiments, we also exploratorily investigated whether perceived robotic agency affects the importance participants ascribe to experience in humanity.

We preregistered the design and analysis plans for Experiment 2 (https://osf.io/2y74f/?view_only=805588bcc2c04b308c4a8d0d22d4d58b) and made the data and analysis code of the preliminary investigation and three experiments available on the Open Science Framework (OSF, https://osf.io/ef7ns/?view_only=b5b8618637684ba085709697a4b4c903). The following information is provided in the Supplementary Materials: description and results of the preliminary investigation, stimulus materials, and supplementary analyses.

Experiment 1

Experiment 1 aimed to investigate whether perceived robotic agency decreases the importance of agency in humanity. We manipulated participants’ perception of robotic agency and measured the importance they ascribed to agency in humanity. We predicted that participants in the high (vs. low and control) perceived robotic agency condition would rate agency as less important to humanity.

Method

Participants

Experiment 1 recruited participants through the Credamo online survey platform (https://www.credamo.com/). Due to uncertainty about the appropriate sample size for this study, we planned to recruit 80 participants per condition. Participants who did not successfully complete the attention check were automatically excluded by the platform’s built-in exclusion system. The final sample, excluding only those who failed the attention check (as in the following experiments), consisted of 240 participants (102 males, 138 females; Mage = 30.27, SDage = 7.67; 18–67 years old). Sensitivity analysis using G*Power [13, 18] indicated that, with a statistical power of 80% and a significance level of 0.05 for the three-condition experiment, the current sample size (N = 240) is sufficient to detect a minimum effect size of f = 0.20. Participants were randomly and evenly assigned to the three experimental conditions, and they received 3 CNY as compensation.
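For readers without G*Power, the same sensitivity analysis can be reproduced in Python; below is a minimal sketch assuming the statsmodels package is available (the effect-size metric is Cohen’s f, as in G*Power).

```python
# Sensitivity analysis for a three-condition one-way ANOVA:
# given N = 240, alpha = .05, and power = .80, solve for the
# minimum detectable effect size f (G*Power's "sensitivity" mode).
from statsmodels.stats.power import FTestAnovaPower

min_f = FTestAnovaPower().solve_power(
    effect_size=None,  # the unknown we solve for
    nobs=240,          # total sample size across all conditions
    alpha=0.05,
    power=0.80,
    k_groups=3,
)
print(f"minimum detectable effect size: f = {min_f:.2f}")  # approx. 0.20
```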

Materials and procedure

Perceived robotic agency manipulation

In the experimental groups, participants read textual materials about robotic agency (see Supplementary Materials for more details). Specifically, in the high agency condition, the text stated that robots (e.g., ChatGPT, Ernie Bot) have reached a remarkable level of agency and have achieved high scores in agency assessments. Conversely, in the low agency condition, the text stated that robot development is still in its infancy and robotic agency remains quite low, exemplified by robots like Roomba and Spot. In the control condition, participants did not read any textual material.

As a manipulation check, we presented participants with a list of seven agency capabilities, such as planning, communication, and thought [22], and asked them to rate the extent to which they believed robots possess these capabilities on a 7-point Likert scale (1 = not at all, 7 = very much). The order of the capabilities was randomized for each participant, and the scores of all items were averaged to create the agency index (α = 0.89).
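As an illustration of the index construction, here is a small Python sketch with a hypothetical response matrix; the reliability formula is standard Cronbach’s alpha, and the simulated data will not reproduce the reported α = 0.89.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants, k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of sum score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 240 participants x 7 agency items, 1-7 Likert scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(240, 7)).astype(float)

agency_index = ratings.mean(axis=1)  # per-participant agency index
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```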

Ascribed importance of agency in humanity

Adapted from Jackson et al.’s [32] measurement, we asked participants, “Among the following abilities and traits, which are more essential for humans versus robots?” Participants then rated the importance of seven abilities on a 7-point scale (1 = more essential for robots, 7 = more essential for humans). The seven abilities were based on the agency capability items developed by Gray et al. [22], such as planning, communication, and thought (α = 0.69). The order of items was randomized for each participant.

Results

Manipulation check

To test the effectiveness of the manipulation, we conducted a one-way ANOVA with perceived robotic agency as the dependent variable. The results revealed a significant difference in robotic agency perception across conditions, F(2, 237) = 76.00, p < 0.001, ηp2 = 0.391, 90% CI [0.310, 0.456]. Further planned comparisons revealed that perceived robotic agency in the high condition (M = 5.18, SD = 0.90) was higher than that in the control condition (M = 4.41, SD = 1.07; t(237) = 4.40, p < 0.001, Cohen’s d = 0.78, 95% CI [0.46, 1.10]) and the low condition (M = 3.06, SD = 1.29; t(237) = 12.17, p < 0.001, Cohen’s d = 1.91, 95% CI [1.53, 2.28]). Furthermore, the difference between the control and low conditions was also significant (t(237) = 7.78, p < 0.001, Cohen’s d = 1.14, 95% CI [0.81, 1.47]). Thus, the manipulation of robotic agency perception was effective (see Note 1).
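The analysis pipeline used throughout this experiment, an omnibus one-way ANOVA followed by Bonferroni-corrected pairwise comparisons, can be sketched in Python with hypothetical group vectors. Note that the paper’s planned comparisons use the ANOVA’s pooled error term with df = 237, whereas this sketch uses simple pairwise t-tests.

```python
import numpy as np
from scipy import stats

# Hypothetical manipulation-check scores, 80 participants per condition.
rng = np.random.default_rng(1)
high = rng.normal(5.18, 0.90, 80)
ctrl = rng.normal(4.41, 1.07, 80)
low = rng.normal(3.06, 1.29, 80)

# Omnibus one-way ANOVA across the three conditions.
F, p = stats.f_oneway(high, ctrl, low)
print(f"F(2, 237) = {F:.2f}, p = {p:.4f}")

# Pairwise comparisons with a Bonferroni correction for 3 comparisons.
pairs = {"high vs ctrl": (high, ctrl),
         "high vs low": (high, low),
         "ctrl vs low": (ctrl, low)}
for name, (a, b) in pairs.items():
    t, p_raw = stats.ttest_ind(a, b)
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"{name}: t = {t:.2f}, p_bonf = {min(p_raw * 3, 1.0):.4f}, d = {d:.2f}")
```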

Effect of perceived robotic agency on the importance of agency in humanity

A one-way ANOVA was conducted with the importance of agency in humanity scores as the dependent variable. The results revealed a significant difference in the importance of agency in humanity across conditions, F(2, 237) = 7.98, p < 0.001, ηp2 = 0.063, 90% CI [0.019, 0.114]. Further planned comparisons (see Fig. 1) revealed that participants in the high perceived robotic agency condition ascribed less importance to agency in defining humanity (M = 4.65, SD = 0.78) than those in the control condition (M = 5.01, SD = 0.78; t(237) = −2.56, p = 0.033, Cohen’s d = 0.46, 95% CI [0.15, 0.78]) and the low condition (M = 5.20, SD = 1.07; t(237) = −3.94, p < 0.001, Cohen’s d = 0.59, 95% CI [0.27, 0.90]). However, there was no significant difference between the control and low conditions (t(237) = −1.38, p = 0.510, Cohen’s d = 0.20, 95% CI [−0.11, 0.51]) (see Note 2).

Fig. 1. Effects on the importance of agency in humanity in Experiment 1. Note: Dots depict jittered individual data points. Boxplots display the median (central line), the first quartile (bottom line), and the third quartile (top line). Colored fields display the distribution of responses. **p < .01, ***p < .001

The above findings showed that perceived robotic agency decreases the importance of agency in humanity, supporting Hypothesis 1. In other words, perceived high robotic agency did induce a social creativity strategy. It is important to note that no significant difference was observed between the control condition and the low condition. This suggests that the social creativity strategy is only triggered when the high agency characteristics of robots are emphasized.

Experiment 2

Experiment 2 aimed to test the mediating role of distinctiveness threat. Specifically, we manipulated perceived robotic agency and measured both distinctiveness threat and the ascribed importance of agency in humanity. Given that Experiment 1 found no significant difference in the importance of agency in humanity between the control and low conditions, we removed the control condition from Experiment 2. In addition, we incorporated a measure of the importance of experience in humanity to explore whether perceived robotic agency also affects the perceived importance of the experience dimension of mind in humanity. We predicted that perceived robotic agency would decrease the importance of agency in humanity via increased distinctiveness threat.

Method

Participants

We utilized Monte Carlo analysis (https://schoemanna.shinyapps.io/mc_power_med/) to estimate the sample size for the mediation model in this study. We set a smallest effect size of interest (SESOI) of r = 0.21 for the three paths in the mediation model (i.e., paths a, b, and c’). Based on these parameters, the minimum required sample size was 291, with a power of 80% and alpha = 0.05. We finally recruited 350 participants (137 males, 213 females; Mage = 27.84, SDage = 6.87; 18–59 years old) who passed the attention check through Credamo, and we offered each participant a reward of 2 CNY. Participants were randomly assigned to either the high (n = 175) or the low agency condition (n = 175).
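The Monte Carlo approach simulates data under assumed path coefficients and counts how often the indirect effect’s confidence interval excludes zero. Below is a simplified Python sketch under these assumptions (standardized paths a = b = c’ = 0.21; per-replication Monte Carlo CIs in the spirit of the Schoemann et al. app); exact power estimates will vary with the simulation settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def slope_and_se(X, y):
    """OLS slopes and standard errors for predictors X (intercept added)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (len(y) - Xd.shape[1])
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
    return beta[1:], np.sqrt(np.diag(cov))[1:]

def power_mediation(n, a=0.21, b=0.21, c_prime=0.21, n_reps=1000, n_mc=1000):
    hits = 0
    for _ in range(n_reps):
        # Simulate standardized x, m, y under the assumed paths.
        x = rng.normal(size=n)
        m = a * x + rng.normal(scale=np.sqrt(1 - a**2), size=n)
        resid_var = max(1 - b**2 - c_prime**2 - 2 * a * b * c_prime, 0.05)
        y = c_prime * x + b * m + rng.normal(scale=np.sqrt(resid_var), size=n)
        # Estimate paths a (m ~ x) and b (y ~ x + m).
        (a_hat,), (se_a,) = slope_and_se(x[:, None], m)
        bs, ses = slope_and_se(np.column_stack([x, m]), y)
        b_hat, se_b = bs[1], ses[1]
        # Monte Carlo CI for the indirect effect a*b.
        ab = rng.normal(a_hat, se_a, n_mc) * rng.normal(b_hat, se_b, n_mc)
        lo, hi = np.percentile(ab, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)
    return hits / n_reps

print(power_mediation(291))  # roughly .80 under these assumptions
```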

Materials and procedure

Perceived robotic agency manipulation

We employed the same materials as in Experiment 1 to manipulate participants’ perceptions of robotic agency. As a manipulation check, all participants were required to rate their perception of robotic agency, utilizing items consistent with Experiment 1 (α = 0.89).

Distinctiveness threat

The measurement of distinctiveness threat was adapted from Ferrari et al. [19] and comprised three items (“I have the impression that the differences between machines and humans have become increasingly flimsy”, “When looking at robots, I wonder/ask myself what are the differences between robots and humans”, “I think the development of robots blurs the boundaries between humans and machines”). All items were rated on a 7-point scale (1 = strongly disagree, 7 = strongly agree). We used the average score of the 3 items as an indicator of distinctiveness threat, where higher scores indicate a higher level of distinctiveness threat (α = 0.76).

Ascribed importance of agency & experience in humanity

We used the same items as in Experiment 1 to assess the importance of agency in humanity (α = 0.67). Additionally, based on the mental capability items developed by Gray et al. [22], we added five items about experience capabilities, such as having desire, personality, and experiencing emotions, to measure the importance of experience in humanity (α = 0.89).

Results

In the pre-registration, we planned to exclude outliers during analysis. However, since excluding outliers did not affect the results, we present the original data here, while providing the outlier-removed results in the supplementary materials.

Manipulation check

To test the effectiveness of the manipulation, we conducted an independent-samples t-test (Welch-corrected) with perceived robotic agency as the dependent variable. The results revealed that, compared to the low condition (M = 3.37, SD = 1.05), participants in the high condition (M = 5.35, SD = 0.86) perceived higher robotic agency, t(334.73) = 19.32, p < 0.001, Cohen’s d = 2.07, 95% CI [1.80, 2.32]. Thus, the manipulation of perceived robotic agency was deemed effective.
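The fractional degrees of freedom indicate a Welch (unequal-variances) t-test; as an illustration, a sketch with hypothetical vectors matching the reported condition means and SDs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
high = rng.normal(5.35, 0.86, 175)  # hypothetical high-agency condition
low = rng.normal(3.37, 1.05, 175)   # hypothetical low-agency condition

# Welch's t-test (equal_var=False), matching the fractional df reported.
t, p = stats.ttest_ind(high, low, equal_var=False)

# Cohen's d with the pooled standard deviation.
d = (high.mean() - low.mean()) / np.sqrt((high.var(ddof=1) + low.var(ddof=1)) / 2)
print(f"t = {t:.2f}, p = {p:.4g}, d = {d:.2f}")
```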

Effect of perceived robotic agency on distinctiveness threat

An independent t-test on distinctiveness threat revealed that participants in the high condition (M = 5.10, SD = 1.05) perceived significantly greater distinctiveness threat from robots than those in the low condition (M = 4.09, SD = 1.35), t (328.22) = 7.84, p < 0.001, Cohen’s d = 0.84, 95% CI [0.62, 1.06]. This showed that perceived robotic agency increases distinctiveness threat.

Effect of perceived robotic agency on the importance of agency & experience in humanity

We incorporated the preregistered exploratory analysis of the experience dimension into the formal analysis; thus, a 2 (perceived robotic agency: high vs. low) × 2 (mind dimension: agency vs. experience) mixed ANOVA was conducted with the importance-for-humanity scores as the dependent variable. Results revealed a significant interaction, F(1, 348) = 8.94, p = 0.003, ηp2 = 0.025, 90% CI [0.005, 0.058]. Simple effects analysis (see Note 3; see Table 1 and Fig. 2) revealed that on the agency dimension, participants in the high condition ascribed less importance to agency in defining humanity than those in the low condition, F(1, 348) = 19.92, p < 0.001, ηp2 = 0.054, 90% CI [0.022, 0.097]. This result is consistent with Experiment 1, suggesting that perceived robotic agency diminishes the importance of agency for being a human (Hypothesis 1). However, on the experience dimension, there was no significant difference between the low and high conditions, F(1, 348) = 0.39, p = 0.534, ηp2 = 0.001, 90% CI [0.000, 0.014].
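For a 2 (between) × 2 (within) design, the interaction test is equivalent to comparing within-person difference scores (agency minus experience) across conditions; the sketch below illustrates this equivalence with hypothetical data. The simple effect shown is a plain between-groups test, which differs slightly from a simple effect computed with the ANOVA’s pooled error term.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 175  # participants per condition

# Hypothetical per-participant importance ratings (agency, experience).
agency_high, exp_high = rng.normal(4.7, 0.8, n), rng.normal(5.9, 0.8, n)
agency_low, exp_low = rng.normal(5.1, 0.8, n), rng.normal(5.9, 0.8, n)

# In a 2 (between) x 2 (within) design, the interaction reduces to a
# between-groups comparison of within-person difference scores.
diff_high = agency_high - exp_high
diff_low = agency_low - exp_low
t, p = stats.ttest_ind(diff_high, diff_low)
print(f"interaction: F(1, {2 * n - 2}) = {t**2:.2f}, p = {p:.4f}")

# Simple effect on the agency dimension: high vs. low condition.
t_a, p_a = stats.ttest_ind(agency_high, agency_low)
print(f"agency simple effect: F = {t_a**2:.2f}, p = {p_a:.4f}")
```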

Table 1 Means (SDs) of dependent variables in both conditions in Experiment 2
Fig. 2. Effects on the importance of agency & experience in humanity in Experiment 2. Note: Dots depict jittered individual data points. Boxplots display the median (central line), the first quartile (bottom line), and the third quartile (top line). Colored fields display the distribution of responses. ***p < .001

The mediating role of distinctiveness threat

We analyzed the mediating role of distinctiveness threat between perceived robotic agency and the importance of agency in humanity, using the PROCESS V3.4 macro in SPSS 25.0 ([28]; Model 4, 5,000 bootstrap resamples). The high and low conditions were coded as 1 and 0, respectively. The results (see Table 2 and Fig. 3) revealed a significant indirect path from perceived robotic agency to the importance of agency in humanity via distinctiveness threat (indirect effect = −0.082, SE = 0.04, 95% CI [−0.179, −0.008]). Furthermore, since there was no significant correlation between distinctiveness threat and the importance of experience in humanity (r = −0.036, p = 0.50), which fails the prerequisites for mediation analysis [48], we did not analyze the mediating effect of distinctiveness threat on the relationship between perceived robotic agency and the importance of experience in humanity.
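PROCESS Model 4 estimates path a (mediator regressed on predictor) and path b (outcome regressed on mediator, controlling for the predictor) by OLS and bootstraps the product a × b. An equivalent sketch in Python with hypothetical data (5,000 resamples and a percentile CI, as in the paper):

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b from two OLS regressions: m ~ x, and y ~ x + m (Model 4)."""
    a = np.polyfit(x, m, 1)[0]                   # path a: slope of m on x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # path b: slope of y on m
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.array([
        indirect_effect(*(v[idx] for v in (x, m, y)))
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ])
    return np.percentile(est, [2.5, 97.5])

# Hypothetical data: condition (0 = low, 1 = high), threat, importance.
rng = np.random.default_rng(4)
cond = np.repeat([0, 1], 175).astype(float)
threat = 4.1 + 1.0 * cond + rng.normal(0, 1.2, 350)
importance = 5.2 - 0.1 * threat - 0.3 * cond + rng.normal(0, 0.9, 350)

lo, hi = bootstrap_ci(cond, threat, importance)
print(f"95% bootstrap CI for a*b: [{lo:.3f}, {hi:.3f}]")
```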

Table 2 Mediation analysis in Experiment 2
Fig. 3. Mediating role of distinctiveness threat in Experiment 2. Note: Coefficients are standardized. The value above the direct path is the total effect, and the value below the direct path is the direct effect. *p < .05, **p < .01, ***p < .001

These findings supported Hypothesis 2 that distinctiveness threat mediates the effect of perceived robotic agency on the importance of agency in humanity. However, we did not find that perceived robotic agency affects the importance of experience in humanity. This null effect may stem from the consistent perception that robots lack emotions [14]. Since experience has been identified as a key distinguishing factor in human–robot comparisons [26, 43], it follows that, regardless of fluctuations in perceived robotic agency, experience remains the most important feature in humanity.

Experiment 3

Experiment 2 indicated that distinctiveness threat mediated the effect of perceived robotic agency on the importance of agency in humanity. However, because it relied on a measurement-of-mediation design [48], we needed to further validate the causal relationship between the mediator and the dependent variable to provide stronger support for the mediation model. Therefore, in Experiment 3, we manipulated distinctiveness threat and tested its effect on the importance of agency in humanity. As in Experiment 2, we also measured the importance of experience in humanity, to confirm the null effect of distinctiveness threat on this dimension.

Method

Participants

Experiment 3 employed a 2 × 2 mixed-factorial design, with distinctiveness threat (high vs. low) as the between-subjects factor and mind dimension (agency vs. experience) as the within-subjects factor. A priori power analysis revealed that a sample size of 290 would be sufficient to achieve 80% power to detect a small interaction effect (f = 0.1). We finally recruited 330 participants (140 males, 190 females; Mage = 28.91, SDage = 7.94; 18–67 years old) who passed the attention check through Credamo, compensating each participant with 2 CNY. Participants were randomly assigned to the high (n = 165) or the low distinctiveness threat condition (n = 165).

Materials and procedure

Distinctiveness threat manipulation

We used material adapted from Wilson and Hugenberg [70] for the distinctiveness threat manipulation (see Supplementary Materials for more details). Specifically, all participants were first presented with a paragraph outlining general advancements in the field of robotic agency. Next, in the high distinctiveness threat condition, participants read a paragraph describing that the agency behaviors exhibited by robots are sufficient to establish that they possess agency similar to humans. In contrast, in the low distinctiveness threat condition, participants read a paragraph describing that the agency behaviors exhibited by robots are insufficient to establish that they possess agency similar to humans, as there are fundamental differences in the mechanisms underlying these behaviors between robots and humans. As a manipulation check, participants rated distinctiveness threat using the same items as in Experiment 2 (α = 0.84).

Ascribed importance of agency & experience in humanity

We used the same items as Experiment 2 to measure the importance of agency (α = 0.68) and experience (α = 0.85) in humanity.

Results

Manipulation check

To test the effectiveness of the manipulation, we conducted an independent-samples t-test (Welch-corrected) with distinctiveness threat as the dependent variable. The results revealed that, compared to the low distinctiveness threat condition (M = 4.02, SD = 1.42), participants in the high condition (M = 5.66, SD = 0.88) perceived higher distinctiveness threat, t(273.19) = 12.61, p < 0.001, Cohen’s d = 1.39, 95% CI [1.15, 1.63]. Thus, the manipulation of distinctiveness threat was deemed effective.

Effects of distinctiveness threat on the importance of agency & experience in humanity

A 2 (distinctiveness threat: high vs. low) × 2 (mind dimension: agency vs. experience) mixed ANOVA was conducted with the importance scores as the dependent variable (see Table 3). The results revealed a significant interaction, F(1, 328) = 4.72, p = 0.030, ηp2 = 0.014, 90% CI [0.001, 0.042]. Simple effects analysis (see Table 3 and Fig. 4) revealed that on the agency dimension, participants in the low distinctiveness threat condition ascribed greater importance to agency in defining humanity than those in the high condition, F(1, 328) = 4.54, p = 0.034, ηp2 = 0.014, 90% CI [0.001, 0.042]. However, on the experience dimension, there was no significant difference between the low and the high distinctiveness threat conditions, F(1, 328) = 0.22, p = 0.642, ηp2 = 0.001, 90% CI [0.000, 0.013]. These results validated the causal effect of distinctiveness threat on the importance of agency (rather than experience) in humanity.

Table 3 Means (SDs) of dependent variables in both conditions in Experiment 3
Fig. 4. Effects on the importance of agency & experience in humanity in Experiment 3. Note: Dots depict jittered individual data points. Boxplots display the median (central line), the first quartile (bottom line), and the third quartile (top line). Colored fields display the distribution of responses. DT = Distinctiveness Threat. *p < .05

General discussion

Throughout history, advancements in technology have inevitably shaped how humans view themselves and their features. For instance, the Second Industrial Revolution reshaped the importance of manual and mental labor for humans [38, 63]. Today, the Fourth Industrial Revolution continues this pattern. Rooted in social identity theory, our research explored the influence of perceived robotic agency on the importance of agency in humanity across three experiments. The results indicate that when individuals perceive high robotic agency, they reduce the importance of agency in humanity (Experiment 1), and that distinctiveness threat mediates this effect (Experiments 2 & 3).

Notably, our findings in Experiment 2 seem to contradict some prior studies. For example, while earlier research found no change in importance ratings on the threatened dimension [5, 51], we found a change on that dimension and observed the null effect on the alternative dimension instead. We posit that these inconsistencies may originate from differences in experimental manipulations. On the one hand, for the threatened dimension, unlike the humanlike traits (e.g., morality, thought) of robots examined in our research, previous studies focused on robot features that have long been viewed as shared between humans and machines (e.g., computation, sound detection), likely leading to consistently low importance ratings in both conditions. On the other hand, for the alternative dimension, prior studies contrasted robot-salient versus no-robot conditions. In contrast, our Experiment 2 compared high-agency (e.g., ChatGPT) and low-agency (e.g., Roomba) robots. The inherent salience of robots in both conditions may have resulted in consistently high importance ratings on the alternative dimension (i.e., experience).

The current research enriches our understanding of the impact of perceived robotic agency on reconsidering what it means to be human. Taking the perspective that robots are a mirror of the human mind [71], we found that the advancement of robotic agency changes the importance of agency as a human feature. Previous studies have treated agency as an important component of humanness, using it as a benchmark to assess the current state of robot advancement [21, 36, 66]. However, this perspective focuses solely on the development of robots, neglecting the possibility that the fundamental characteristics of being a human may also change as perceptions of robots evolve. From the perspective of social identity, our research shows that features shared with an outgroup of robots can also alter the importance of agency in humanity [16, 60]. This aligns with the media evocation paradigm, which examines how computational agents, including robots, are perceived as both an extension of the self and part of the external world, thereby evoking questions about humanity [17, 62]. Our findings suggest that the relationship between robots and humans is a reciprocal shaping process: as we continuously imbue robots with new features, they, in turn, influence and transform our own human characteristics. This interplay not only demonstrates our control over technology but also reveals the reverse effect technology has on us.

Furthermore, we introduced distinctiveness threat to clarify how the perceived agency of robots impacts the importance of agency in humanity. Previous discussions about the reshaping role of our tools in humanity have primarily focused on the domains of economics and philosophy [41, 63]. For instance, Marx posited that the advent of mass production altered the identity and worth of workers [38]. However, few studies have delved into the psychological mechanism underlying this phenomenon. From the perspective of group distinctiveness, we provide an explanation for this effect. Specifically, the perception of high robotic agency obscures the distinctiveness of the human group, prompting humans to alter their perceptions of agency as a response to this threat. Our research reveals that the core motivation behind modifying one’s view of humanity in human–robot interaction is to preserve the unique position of humans in nature.

This research offers practical insights into how we perceive humanity in the AI era and how to guide robotic development. First, as we integrate robots into society [11, 72], it is crucial to recognize that this process redefines what it means to be human. This transformation not only impacts the essence of human identity but also holds the potential to reshape labor distribution patterns. For instance, the job market may evolve such that tasks demanding agency are delegated to robots, while the contributions of human workers related to experience are increasingly valued [31]. Thus, when appraising the capabilities that enable robots to integrate into human society, we must consider how these capabilities could transform the essence of humanity. Second, it is critical to preserve human distinctiveness as we advance robotics. Current strategies address robot mimicry of human agency through social creativity, but their effectiveness may be lost if robots can fully emulate human characteristics, potentially provoking hostility and resistance to technological advancements [20, 67]. Hence, the evolution of robots should aim to enhance rather than imitate human beings, ensuring they provide unique contributions without eroding the distinctiveness of human communities.

The limitations of the present research warrant attention. First, all of our studies relied on self-report measures, which may introduce social desirability bias or demand characteristics. Therefore, we encourage future research to incorporate alternative measurement methods, such as behavioral measures. Second, our research exclusively recruited Chinese participants. Given that cultural values (e.g., collectivism vs. individualism) may moderate this effect, cross-cultural replications are essential to establish generalizability. Third, we operationalized the perception of robotic agency through specific exemplars (e.g., ChatGPT vs. Roomba). This approach might inadvertently activate broader AI-related concepts (e.g., strong AI vs. weak AI). Thus, future studies could employ standardized robot stimuli to isolate the effects of agency perception more precisely. Finally, this study focused on symbolic threats arising from increased perceptions of robotic agency. However, as outlined in Integrated Threat Theory [58], advanced robots also pose realistic threats, such as job displacement and resource competition [72]. Future research could explore how humans preserve their uniqueness while addressing these tangible challenges.

This research opens avenues for future exploration. First, future studies should investigate the application of social creativity strategies across diverse contexts. While this study highlights how perceptions of robotic agency influence the attributed importance of agency in humanity, social creativity strategies in human–robot interactions are not limited to agency alone. For example, the assertion that “machines only have a chip-core, and humans have a heart” [61] illustrates how emphasizing physiological advantages helps preserve positive distinctiveness. Different contexts may shift the dimensions on which humans and robots are compared, leading to varying outcomes from the use of social creativity strategies. Furthermore, these strategies may dynamically reshape the perceived importance of specific characteristics, redefining what it means to be human.

Moreover, it is essential for future research to explore the downstream consequences of an altered view of human agency. Social creativity strategies, which serve as a cognitive mechanism for redefining group boundaries [60], recalibrate the importance of agency within the construct of humanity. Given the pivotal role that agency plays in the dynamics of dehumanization and moral decision-making [6, 26], it is imperative to comprehend how changes in the perceived importance of agency in humanity might influence these processes.

Conclusion

Advancements in tools, particularly robots, have the potential to profoundly impact our lives, prompting many studies on their integration into human society. However, these developments also subtly reshape our understanding of humanity. Our research demonstrates that perceiving robots with high agency can threaten our sense of distinctiveness, leading us to downplay the role of agency in defining what it means to be human. As AI technology advances and the relationship between robots and humans becomes closer, it is crucial to explore how these tools might redefine human identity.

Data availability

The data presented in this study are available on Open Science Framework (OSF, https://osf.io/ef7ns/?view_only=b5b8618637684ba085709697a4b4c903).

Notes

  1. For the planned comparisons in Experiment 1, we applied the Bonferroni correction to control Type I error rates.

  2. As a robustness check, we included gender and age as covariates in our analyses, and the results remained unchanged in all three experiments.

  3. For the simple effects analyses in Experiments 2 and 3, we applied the Bonferroni correction to control Type I error rates.

References

  1. Akfirat S, Polat FÇ, Yetim U. How the poor deal with their own poverty: a social psychological analysis from the social identity perspective. Soc Indic Res. 2016;127(1):413–33. https://doi.org/10.1007/s11205-015-0953-2.

  2. Bigman YE, Gray K. People are averse to machines making moral decisions. Cognition. 2018;181:21–34. https://doi.org/10.1016/j.cognition.2018.08.003.

  3. Bigman YE, Waytz A, Alterovitz R, Gray K. Holding robots responsible: the elements of machine morality. Trends Cogn Sci. 2019;23(5):365–8. https://doi.org/10.1016/j.tics.2019.02.008.

  4. Castelo N, Bos MW, Lehmann DR. Task-dependent algorithm aversion. J Mark Res. 2019;56(5):809–25. https://doi.org/10.1177/0022243719851788.

  5. Cha Y-J, Baek S, Ahn G, Lee H, Lee B, Shin J, Jang D. Compensating for the loss of human distinctiveness: the use of social creativity under human-machine comparisons. Comput Hum Behav. 2020;103:80–90. https://doi.org/10.1016/j.chb.2019.08.027.

  6. Chu C, Martin AE. The primacy of communality in humanization. J Exp Soc Psychol. 2021;97:Article 104224. https://doi.org/10.1016/j.jesp.2021.104224.

  7. Clarke R. Why the world wants controls over artificial intelligence. Comput Law Secur Rev. 2019;35(4):423–33. https://doi.org/10.1016/j.clsr.2019.04.006.

  8. Culkin JM. A schoolman’s guide to Marshall McLuhan. The Saturday Review; 1967. p. 51–53, 70–72. http://www.unz.org/Pub/SaturdayRev-1967mar18-00051.

  9. Dang J, Liu L. Robots are friends as well as foes: ambivalent attitudes toward mindful and mindless AI robots in the United States and China. Comput Hum Behav. 2021;115:Article 106612. https://doi.org/10.1016/j.chb.2020.106612.

  10. Dang J, Liu L. A growth mindset about human minds promotes positive responses to intelligent technology. Cognition. 2022;220:Article 104985. https://doi.org/10.1016/j.cognition.2021.104985.

  11. Dang J, Liu L. Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Comput Hum Behav. 2022;134:Article 107300. https://doi.org/10.1016/j.chb.2022.107300.

  12. Dang J, Liu L. Do lonely people seek robot companionship? A comparative examination of the loneliness-robot anthropomorphism link in the United States and China. Comput Hum Behav. 2023;141:Article 107637. https://doi.org/10.1016/j.chb.2022.107637.

  13. da Silva Frost A, Ledgerwood A. Calibrate your confidence in research findings: a tutorial on improving research methods and practices. J Pac Rim Psychol. 2020;14:Article e14. https://doi.org/10.1017/prp.2020.7.

  14. De Freitas J, Agarwal S, Schmitt B, Haslam N. Psychological factors underlying attitudes toward AI tools. Nat Hum Behav. 2023;7(11):1845–54. https://doi.org/10.1038/s41562-023-01734-2.

  15. Donnelly L. Forget your GP, robots will ‘soon be able to diagnose more accurately than almost any doctor’. The Telegraph; 2017. https://www.telegraph.co.uk/technology/2017/03/07/robots-will-soon-able-diagnose-accurately-almost-doctor/.

  16. Ellemers N, Haslam SA. Social identity theory. In: Van Lange PAM, Kruglanski AW, Higgins ET, editors. Handbook of theories of social psychology. SAGE Publications Ltd.; 2012. p. 379–398. https://doi.org/10.4135/9781446249222.

  17. Etzrodt K. The ontological classification of conversational agents. In: Følstad A, et al., editors. Chatbot research and design: 4th international workshop, conversations 2020. Springer; 2021. p. 48–63. https://doi.org/10.1007/978-3-030-68288-0_4.

  18. Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175–91. https://doi.org/10.3758/BF03193146.

  19. Ferrari F, Paladino MP, Jetten J. Blurring human–machine distinctions: anthropomorphic appearance in social robots as a threat to human distinctiveness. Int J Soc Robot. 2016;8(2):287–302. https://doi.org/10.1007/s12369-016-0338-y.

  20. Future of Life Institute. Pause giant AI experiments: an open letter. 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

  21. Gangadharbatla H. The role of AI attribution knowledge in the evaluation of artwork. Empir Stud Arts. 2022;40(2):125–42. https://doi.org/10.1177/0276237421994697.

  22. Gray HM, Gray K, Wegner DM. Dimensions of mind perception. Science. 2007;315(5812):619. https://doi.org/10.1126/science.1134475.

  23. Gray K, Wegner DM. Feeling robots and human zombies: mind perception and the uncanny valley. Cognition. 2012;125(1):125–30. https://doi.org/10.1016/j.cognition.2012.06.007.

  24. Grundke A. If machines outperform humans: status threat evoked by and willingness to interact with sophisticated machines in a work-related context. Behav Inf Technol. 2023:1–17. https://doi.org/10.1080/0144929X.2023.2210688.

  25. Hancock PA, Billings DR, Schaefer KE, Chen JYC, de Visser EJ, Parasuraman R. A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors. 2011;53(5):517–27. https://doi.org/10.1177/0018720811417254.

  26. Haslam N, Loughnan S. Dehumanization and infrahumanization. Annu Rev Psychol. 2014;65(1):399–423. https://doi.org/10.1146/annurev-psych-010213-115045.

  27. Haslam N, Loughnan S, Kashima Y, Bain P. Attributing and denying humanness to others. Eur Rev Soc Psychol. 2008;19(1):55–85. https://doi.org/10.1080/10463280801981645.

  28. Hayes AF. Introduction to mediation, moderation, and conditional process analysis: a regression-based approach. Guilford Press; 2013. https://doi.org/10.1111/jedm.12050.

  29. Hewstone M, Rubin M, Willis H. Intergroup bias. Annu Rev Psychol. 2002;53(1):575–604. https://doi.org/10.1146/annurev.psych.53.100901.135109.

  30. Huang M, Ki EJ. Examining the effect of anthropomorphic design cues on healthcare chatbots acceptance and organization-public relationships: trust in a warm human vs. a competent machine. Int J Hum Comput Interact. 2023:1–13. https://doi.org/10.1080/10447318.2023.2290378.

  31. Huang M-H, Rust R, Maksimovic V. The feeling economy: managing in the next generation of artificial intelligence (AI). Calif Manage Rev. 2019;61(4):43–65. https://doi.org/10.1177/0008125619863436.

  32. Jackson LA, Sullivan LA, Harnish R, Hodge CN. Achieving positive social identity: social mobility, social creativity, and permeability of group boundaries. J Pers Soc Psychol. 1996;70(2):241–54. https://doi.org/10.1037/0022-3514.70.2.241.

  33. Jacobs OL, Gazzaz K, Kingstone A. Mind the robot! Variation in attributions of mind to a wide set of real and fictional robots. Int J Soc Robot. 2022;14(2):529–37. https://doi.org/10.1007/s12369-021-00807-4.

  34. Jetten J, Spears R. The divisive potential of differences and similarities: the role of intergroup distinctiveness in intergroup differentiation. Eur Rev Soc Psychol. 2003;14(1):203–41. https://doi.org/10.1080/10463280340000063.

  35. Jetten J, Spears R, Manstead ASR. Intergroup norms and intergroup discrimination: distinctive self-categorization and social identity effects. J Pers Soc Psychol. 1996;71(6):1222–33. https://doi.org/10.1037/0022-3514.71.6.1222.

  36. Kosinski M. Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083. 2023. https://doi.org/10.48550/arXiv.2302.02083.

  37. Lalonde RN. The dynamics of group differentiation in the face of defeat. Pers Soc Psychol Bull. 1992;18(3):336–42. https://doi.org/10.1177/0146167292183010.

  38. Lazarus M. Alienation and action in the young Marx, Aristotle and Arendt. Constellations. 2022;29(3):417–33. https://doi.org/10.1111/1467-8675.12613.

  39. Liang Y, Tan X, Dang J, Wei C, Gu Z, Liu L. Does competence or morality mainly drive self-esteem? It depends on general system justification. J Exp Soc Psychol. 2021;97:Article 104207. https://doi.org/10.1016/j.jesp.2021.104207.

  40. Lohr S. IBM is counting on its bet on Watson, and paying big money for it. The New York Times; 2016. https://www.nytimes.com/2016/10/17/technology/ibm-is-counting-on-its-bet-on-watson-and-paying-big-money-for-it.html.

  41. Luke TW. One-dimensional man: a systematic critique of human domination and nature-society relations. Organ Environ. 2000;13(1):95–101. https://doi.org/10.1177/1086026600131006.

  42. McCorduck P. Machines who think: a personal inquiry into the history and prospects of artificial intelligence. 2nd ed. A K Peters/CRC Press; 2004. https://doi.org/10.1201/9780429258985.

  43. Morera MD, Quiles MN, Correa AD, Delgado N, Leyens J-P. Perception of mind and dehumanization: human, animal, or machine? Int J Psychol. 2018;53(4):253–60. https://doi.org/10.1002/ijop.12375.

  44. Morewedge CK. Preference for human, not algorithm aversion. Trends Cogn Sci. 2022;26(10):824–6. https://doi.org/10.1016/j.tics.2022.07.007.

  45. Morton TA, Haslam SA, Postmes T, Ryan MK. We value what values us: the appeal of identity-affirming science. Polit Psychol. 2006;27(6):823–38. https://doi.org/10.1111/j.1467-9221.2006.00539.x.

  46. Morton TA, Postmes T. When differences become essential: minority essentialism in response to majority treatment. Pers Soc Psychol Bull. 2009;35(5):656–68. https://doi.org/10.1177/0146167208331254.

  47. Müller BCN, Gao X, Nijssen SRR, Damen TGE. I, robot: how human appearance and mind attribution relate to the perceived danger of robots. Int J Soc Robot. 2021;13(4):691–701. https://doi.org/10.1007/s12369-020-00663-8.

  48. Pirlott AG, MacKinnon DP. Design approaches to experimental mediation. J Exp Soc Psychol. 2016;66:29–38. https://doi.org/10.1016/j.jesp.2015.09.012.

  49. Plante CN, Roberts SE, Snider JS, Schroy C, Reysen S, Gerbasi K. ‘More than skin-deep’: biological essentialism in response to a distinctiveness threat in a stigmatized fan community. Br J Soc Psychol. 2015;54(2):359–70. https://doi.org/10.1111/bjso.12079.

  50. Rieger MO, Wang M. Cognitive reflection and theory of mind of Go players. Adv Cogn Psychol. 2021;17(2):117–28. https://doi.org/10.5709/acp-0322-6.

  51. Santoro E, Monin B. The AI effect: people rate distinctively human attributes as more essential to being human after learning about artificial intelligence advances. J Exp Soc Psychol. 2023;107:Article 104464. https://doi.org/10.1016/j.jesp.2023.104464.

  52. Schmader T, Major B, Eccleston CP, McCoy SK. Devaluing domains in response to threatening intergroup comparisons: perceived legitimacy and the status value asymmetry. J Pers Soc Psychol. 2001;80(5):782–96. https://doi.org/10.1037/0022-3514.80.5.782.

  53. Schmid K, Hewstone M, Tausch N, Cairns E, Hughes J. Antecedents and consequences of social identity complexity: intergroup contact, distinctiveness threat, and outgroup attitudes. Pers Soc Psychol Bull. 2009;35(8):1085–98. https://doi.org/10.1177/0146167209337037.

  54. Shrestha A, Mahmood A. Review of deep learning algorithms and architectures. IEEE Access. 2019;7:53040–65. https://doi.org/10.1109/ACCESS.2019.2912200.

  55. Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M, Dieleman S, Grewe D, Nham J, Kalchbrenner N, Sutskever I, Lillicrap T, Leach M, Kavukcuoglu K, Graepel T, Hassabis D. Mastering the game of Go with deep neural networks and tree search. Nature. 2016;529(7587):484–9. https://doi.org/10.1038/nature16961.

  56. Spencer SJ, Zanna MP, Fong GT. Establishing a causal chain: why experiments are often more effective than mediational analyses in examining psychological processes. J Pers Soc Psychol. 2005;89(6):845–51. https://doi.org/10.1037/0022-3514.89.6.845.

  57. Stein JP, Liebold B, Ohler P. Stay back, clever thing! Linking situational control and human uniqueness concerns to the aversion against autonomous technology. Comput Hum Behav. 2019;95:73–82. https://doi.org/10.1016/j.chb.2019.01.021.

  58. Stephan WG, Ybarra O, Martínez CM, Schwarzwald J, Tur-Kaspa M. Prejudice toward immigrants to Spain and Israel: an integrated threat theory analysis. J Cross Cult Psychol. 1998;29(4):559–76. https://doi.org/10.1177/0022022198294004.

  59. Sundar SS. Rise of machine agency: a framework for studying the psychology of human–AI interaction (HAII). J Comput-Mediat Commun. 2020;25(1):74–88. https://doi.org/10.1093/jcmc/zmz026.

  60. Tajfel H, Turner JC. An integrative theory of intergroup conflict. In: Austin WG, Worchel S, editors. The social psychology of intergroup relations. Monterey, CA: Brooks/Cole; 1979. p. 33–47.

  61. TechNode. Alibaba founder Jack Ma returns to China, discusses ChatGPT potential. 2023. https://technode.com/2023/03/28/alibaba-founder-jack-ma-returns-to-china-and-gives-school-talk-on-chatgpt/.

  62. van der Goot M, Etzrodt K. Disentangling two fundamental paradigms in human-machine communication research: media equation and media evocation. Hum-Mach Commun. 2023;6:17–30. https://doi.org/10.30658/hmc.6.2.

  63. Van Osselaer SMJ, Fuchs C, Schreier M, Puntoni S. The power of personal. J Retail. 2020;96(1):88–100. https://doi.org/10.1016/j.jretai.2019.12.006.

  64. Vanman EJ, Kappas A. “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Soc Personal Psychol Compass. 2019;13(8):Article e12489. https://doi.org/10.1111/spc3.12489.

  65. Vieira de Figueiredo C, Pereira CR. The effect of gender and male distinctiveness threat on prejudice against homosexuals. J Pers Soc Psychol. 2021;121(6):1241–57. https://doi.org/10.1037/pspi0000269.

  66. Wang X, Li X, Yin Z, Wu Y, Liu J. Emotional intelligence of large language models. J Pac Rim Psychol. 2023;17:Article 18344909231213958. https://doi.org/10.1177/18344909231213958.

  67. Wang X, Wong YD, Li KX, Yuen KF. This is not me! Technology-identity concerns in consumers’ acceptance of autonomous vehicle technology. Transp Res F: Traffic Psychol Behav. 2020;74:345–60. https://doi.org/10.1016/j.trf.2020.06.005.

  68. Waytz A, Gray K, Epley N, Wegner DM. Causes and consequences of mind perception. Trends Cogn Sci. 2010;14(8):383–8. https://doi.org/10.1016/j.tics.2010.05.006.

  69. Waytz A, Heafner J, Epley N. The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J Exp Soc Psychol. 2014;52:113–7. https://doi.org/10.1016/j.jesp.2014.01.005.

  70. Wilson JP, Hugenberg K. When under threat, we all look the same: distinctiveness threat induces ingroup homogeneity in face memory. J Exp Soc Psychol. 2010;46(6):1004–10. https://doi.org/10.1016/j.jesp.2010.07.005.

  71. Wykowska A. Robots as mirrors of the human mind. Curr Dir Psychol Sci. 2021;30(1):34–40. https://doi.org/10.1177/0963721420978609.

  72. Yam KC, Goh EY, Fehr R, Lee R, Soh H, Gray K. When your boss is a robot: workers are more spiteful to robot supervisors that seem more human. J Exp Soc Psychol. 2022;102:Article 104360. https://doi.org/10.1016/j.jesp.2022.104360.

  73. Yogeeswaran K, Złotowski J, Livingstone M, Bartneck C, Sumioka H, Ishiguro H. The interactive effects of robot anthropomorphism and robot ability on perceived threat and support for robotics research. J Hum-Robot Interact. 2016;5(2):29–47. https://doi.org/10.5898/JHRI.5.2.Yogeeswaran.

  74. Złotowski J, Yogeeswaran K, Bartneck C. Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. Int J Hum Comput Stud. 2017;100:48–54. https://doi.org/10.1016/j.ijhcs.2016.12.008.


Acknowledgements

Not applicable.

Funding

This study was funded by the National Natural Science Foundation of China (32271124).

Author information

Contributions

WX: conceptualization, statistical analysis, and writing of the main manuscript. CL and XM: critical review of and commentary on the manuscript. LL: conceptualization (revision), writing (review and editing), funding acquisition, and supervision.

Corresponding authors

Correspondence to Weifeng Xu or Li Liu.

Ethics declarations

Ethics approval and consent to participate

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Beijing Normal University (protocol code 202111010060, 19 November 2021). Informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Xu, W., Li, C., Miao, X. et al. Our tools redefine what it means to be us: perceived robotic agency decreases the importance of agency in humanity. BMC Psychol 13, 380 (2025). https://doi.org/10.1186/s40359-025-02673-5
