Trust in AI Compared to Human Advisors in Decision Making

Summary: A proposed systematic study would compare human trust in AI versus human advisors across factual and moral decisions to reveal biases in AI adoption. Using controlled experiments across several domains, it would identify when and why people favor each source, guiding the design of human-AI collaboration toward fewer errors and better decision outcomes.

As artificial intelligence becomes more integrated into decision-making processes, there remains a significant gap in understanding how much people trust AI advisors compared to human experts. This is particularly important in two key areas: factual judgments (like medical diagnoses or financial predictions) and moral/value-based decisions (such as ethical dilemmas or policy choices). Without this understanding, society risks either over-reliance on potentially flawed AI systems or underutilization of beneficial AI assistance.

Measuring Trust in AI vs. Human Advisors

One way to study this could involve carefully designed experiments in which participants receive identical advice attributed to either an AI or a human source, so that any difference in uptake reflects the label rather than the content. For example (a counterbalancing sketch follows this list):

  • Presenting identical stock predictions labeled as coming from either a financial analyst or an AI system
  • Offering medical treatment recommendations attributed to doctors versus AI diagnostic tools
  • Providing ethical dilemma solutions from human ethicists and AI moral reasoning systems
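
As a sketch of how that label manipulation could be implemented (in Python, with hypothetical domain and source names that are illustrative rather than from the source), each advice item can be counterbalanced so identical content appears under each label across participants:

```python
import random

# Hypothetical factors: each advice item is crossed with a source label,
# so identical content appears under both labels across participants.
DOMAINS = ["stock_prediction", "medical_treatment", "ethical_dilemma"]
SOURCES = ["human_expert", "ai_system"]

def build_trials(items_per_domain=10, seed=0):
    """Return a shuffled trial list; within each domain, half the items
    are labeled as human advice and half as AI advice, at random."""
    rng = random.Random(seed)
    trials = []
    for domain in DOMAINS:
        labels = SOURCES * (items_per_domain // 2)
        rng.shuffle(labels)  # randomize which items get which label
        for item_id, label in enumerate(labels):
            trials.append({"domain": domain, "item": item_id, "source": label})
    rng.shuffle(trials)
    return trials

print(build_trials()[:3])
```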

The research could track not just which advice people follow, but how their trust changes when the sources make mistakes. An interesting angle would be comparing how trust differs between factual errors (like a wrong prediction) versus moral misjudgments (like an unethical recommendation).
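
One common behavioral measure from the judge-advisor literature is "weight of advice" (WOA), which captures how far a participant's final judgment shifts toward the advice. A minimal sketch of computing WOA and a before/after-error trust shift (function and variable names are illustrative, not from the source):

```python
def weight_of_advice(initial, advice, final):
    """Weight of Advice (WOA): 0 means the advice was ignored,
    1 means the participant adopted it fully."""
    if advice == initial:
        return float("nan")  # undefined when advice matches the prior
    return (final - initial) / (advice - initial)

def trust_shift(trials, error_index):
    """Compare mean WOA before vs. after the advisor's first visible
    mistake; `trials` is a list of (initial, advice, final) tuples."""
    woas = [weight_of_advice(*t) for t in trials]
    before = [w for w in woas[:error_index] if w == w]  # drop NaNs
    after = [w for w in woas[error_index:] if w == w]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(after) - mean(before)  # negative => trust dropped
```

Computing this shift separately for factual and moral items would give the error-type comparison described above.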

Practical Applications and Study Design

This type of research could be conducted in phases:

  1. Small pilot studies to refine the experimental methods
  2. Large-scale online experiments with diverse participants (a sample-size sketch follows this list)
  3. Specialized studies with professionals in fields like healthcare and finance
  4. Long-term tracking of how trust evolves as people gain more experience with AI
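
For the large-scale phase, the required sample size could be estimated in advance. A sketch using statsmodels, where the assumed effect size (d = 0.3) is a placeholder rather than a figure from the source:

```python
from statsmodels.stats.power import TTestIndPower

# Placeholder assumption: a small-to-medium effect (d = 0.3) for the
# difference in advice-taking between AI- and human-labeled conditions.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"~{n_per_group:.0f} participants per condition")  # roughly 175
```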

The findings could help AI developers create systems that earn appropriate levels of trust, assist organizations in designing better human-AI workflows, and inform policymakers about where AI advisors might need more regulation. For instance, if people tend to overtrust AI in medical decisions but distrust it in ethical judgments, this would suggest different approaches for implementing AI in hospitals versus courts.

Differentiating from Existing Research

While some studies have looked at general attitudes toward AI or examined human-algorithm interaction in specific fields, this approach would systematically compare trust across different types of decisions using controlled experiments. Unlike surveys that ask people how they feel about AI, this would measure how they actually behave when receiving AI advice in realistic scenarios. The research could also track how these patterns differ between the general public and professionals who regularly work with AI tools in their fields.
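
Data of this shape could be analyzed with a mixed-effects model crossing advisor source with decision type, with a random intercept per participant. A sketch assuming a hypothetical long-format table of trials (the file and column names are assumptions for illustration):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial, with the measured
# weight of advice (woa), the labeled source, and the decision type.
df = pd.read_csv("trials.csv")  # columns: participant, source, decision_type, woa

# The source x decision-type interaction tests whether trust in AI
# differs for factual versus moral advice.
model = smf.mixedlm("woa ~ source * decision_type", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```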

By focusing on both factual and moral decision-making, this research could provide a more complete picture of when and why people trust (or distrust) AI advisors compared to human experts.

Source of Idea:
This idea was taken from https://forum.effectivealtruism.org/posts/hLdYZvQxJPSPF9hui/a-research-agenda-for-psychology-and-ai and further developed using an algorithm.
Skills Needed to Execute This Idea:
Experimental Design, Data Analysis, Behavioral Psychology, Survey Methodology, Statistical Modeling, AI Ethics, Human-Computer Interaction, Cognitive Science, Research Ethics, Decision Theory
Resources Needed to Execute This Idea:
Custom AI Advisory Software, Large-Scale Online Experiment Platform, Medical Diagnostic AI System, Financial Prediction AI System, Ethical Dilemma AI Model
Categories: Artificial Intelligence, Human-Computer Interaction, Trust Measurement, Decision-Making Studies, Ethical AI, Behavioral Research

Hours to Execute (basic)

500 hours to execute minimal version

Hours to Execute (full)

800 hours to execute full idea

Estimated Number of Collaborators

10-50 Collaborators

Financial Potential

$10M–100M Potential

Impact Breadth

Affects 100K-10M people

Impact Depth

Significant Impact

Impact Positivity

Probably Helpful

Impact Duration

Impact Lasts 3-10 Years

Uniqueness

Moderately Unique

Implementability

Moderately Difficult to Implement

Plausibility

Logically Sound

Replicability

Moderately Difficult to Replicate

Market Timing

Good Timing

Project Type

Research

Project idea submitted by u/idea-curator-bot.