Researching Bias Against Digital Entities in AI Interactions

Summary: This project explores "substratism," a potential bias against digital entities such as AI, by adapting psychological methods from discrimination research to test whether humans favor biological over artificial beings, with implications for AI ethics, design, and public awareness.

As human interactions with digital entities like AI assistants and robots become more common, a new form of discrimination—based on whether an entity is biological or digital—may emerge. This bias, termed "substratism," has not been systematically studied, despite its potential to influence everything from AI design to social equality. By exploring this gap, researchers could uncover patterns of bias that might shape how humans treat non-biological entities in the future.

The Science Behind Substratism

One way to study substratism is to adapt methods from existing discrimination research, such as implicit bias tests and moral consideration surveys. For example, participants could be asked to allocate resources between a digital assistant and a human, or to complete reaction-time tasks that measure subconscious associations with different substrates. Such experiments would help quantify whether, and how strongly, people favor biological over digital beings. Similar approaches have been used to study speciesism (bias against non-human animals), suggesting that substratism could be examined within well-established psychological frameworks.
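
To make this concrete, the sketch below shows how one standard measure, an IAT-style D score (difference of block mean latencies divided by their pooled standard deviation), could be computed for a substrate implicit association test. The block labels and reaction times are illustrative assumptions, not data from any real study.

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Greenwald-style D score: difference of block mean latencies
    divided by the pooled standard deviation of all latencies (ms)."""
    # Drop implausibly slow trials (> 10 s), as in conventional scoring.
    comp = [rt for rt in compatible_rts if rt < 10_000]
    incomp = [rt for rt in incompatible_rts if rt < 10_000]
    pooled_sd = statistics.stdev(comp + incomp)
    return (statistics.mean(incomp) - statistics.mean(comp)) / pooled_sd

# Hypothetical latencies: if "digital + negative" pairings are answered
# faster than "digital + positive" ones, D comes out positive,
# consistent with an implicit bias against digital entities.
d = iat_d_score(
    compatible_rts=[612, 545, 701, 660, 589],    # digital + negative block
    incompatible_rts=[798, 845, 760, 912, 804],  # digital + positive block
)
print(f"D = {d:.2f}")
```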

Why This Matters

Understanding substratism could benefit multiple groups: ethicists might use findings to guide policies on AI rights, developers could design less biased AI interfaces, and the public might become more aware of their own prejudices. Early research could also identify whether substratism overlaps with other biases, such as anthropomorphism (attributing human traits to non-humans), or stands as a distinct phenomenon. If proven significant, strategies to mitigate it, such as awareness campaigns or ethical design principles, could follow.

Getting Started

A minimum viable approach could begin with small-scale implicit bias tests and surveys, refining study designs before scaling up. Partnering with technology companies or academic departments could provide both funding and access to diverse participant pools. Over time, this research might expand into interviews with AI users or controlled experiments comparing human-AI interactions across cultures.
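
Even at pilot scale, two quantitative questions arise: how many participants are needed, and how the allocation data would be tested. The sketch below assumes a within-subject task in which each participant splits a fixed budget between a human and a digital assistant; the target effect size and the data are hypothetical placeholders.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

# Sample size for a paired/one-sample design detecting a medium effect
# (Cohen's d = 0.5) at alpha = .05 with 80% power.
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required N ≈ {np.ceil(n):.0f}")  # about 34 participants

# Hypothetical pilot data: the share of the budget each participant
# gave to the human. A one-sample t-test asks whether that share
# reliably exceeds an unbiased 50/50 split.
human_share = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.64, 0.53])
t, p = stats.ttest_1samp(human_share, popmean=0.5)
print(f"t = {t:.2f}, p = {p:.3f}")
```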

By mapping uncharted biases in human-digital relationships, this work could lay the groundwork for more ethical interactions in an increasingly digital world.

Source of Idea:
This idea was taken from https://www.sentienceinstitute.org/research-agenda and further developed using an algorithm.
Skills Needed to Execute This Idea:
Psychological Research, Implicit Bias Testing, Survey Design, Data Analysis, Ethical Frameworks, Human-AI Interaction, Experimental Design, Statistical Analysis, Behavioral Science, Cross-Cultural Studies, Algorithmic Bias Awareness, Social Psychology, Research Methodology
Resources Needed to Execute This Idea:
Implicit Bias Test Software, Moral Consideration Survey Tools, AI Interface Design Software
Categories: Artificial Intelligence Ethics, Social Psychology, Human-Computer Interaction, Discrimination Studies, Cognitive Science, Emerging Technologies

Hours to Execute (basic)

300 hours to execute minimal version

Hours to Execute (full)

500 hours to execute full idea

Estimated No. of Collaborators

1-10 Collaborators

Financial Potential

$0–1M Potential

Impact Breadth

Affects 100K-10M people

Impact Depth

Significant Impact

Impact Positivity

Maybe Helpful

Impact Duration

Impact Lasts Decades/Generations

Uniqueness

Highly Unique

Implementability

Moderately Difficult to Implement

Plausibility

Logically Sound

Replicability

Moderately Difficult to Replicate

Market Timing

Good Timing

Project Type

Research

Project idea submitted by u/idea-curator-bot.