The challenge of aligning AI systems with human values often circles back to a fundamental limitation: our incomplete understanding and measurement of human well-being. Current methods rely heavily on self-reported data, which can be unreliable due to biases such as imperfect recall and social desirability. This gap makes it difficult to ensure AI systems optimize for what truly matters to people rather than for flawed proxies.
One way to address this issue could be to develop more robust well-being metrics that combine multiple measurement approaches.
The result might be a framework that provides a more complete, less distorted picture of well-being, useful both for human decision-making and for helping AI systems better understand human values.
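To make the idea concrete, here is a minimal sketch of how such a framework might combine several measurement channels into one composite score. The channel names, weights, and bias correction below are all hypothetical placeholders, not part of the original proposal; a real framework would calibrate them against validated instruments.

```python
from dataclasses import dataclass

@dataclass
class WellbeingSignals:
    """Hypothetical measurement channels, each normalized to the 0-1 range."""
    self_report: float    # survey answer, prone to recall and social-desirability bias
    behavioral: float     # e.g. activity or sleep patterns from a wearable
    physiological: float  # e.g. a heart-rate-variability proxy

def composite_score(signals: WellbeingSignals,
                    weights=(0.4, 0.35, 0.25),
                    self_report_bias=0.05) -> float:
    """Weighted combination of channels, discounting self-report for an
    assumed positivity bias. Weights and the bias term are illustrative."""
    adjusted_self_report = max(0.0, signals.self_report - self_report_bias)
    channels = (adjusted_self_report, signals.behavioral, signals.physiological)
    total = sum(w * c for w, c in zip(weights, channels))
    return round(total / sum(weights), 3)

# Example: self-report is high, but behavior and physiology disagree with it.
person = WellbeingSignals(self_report=0.9, behavioral=0.5, physiological=0.4)
print(composite_score(person))  # → 0.615
```

The point of the sketch is the structure, not the numbers: each channel contributes independently, so no single biased source dominates the assessment.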
Such measurement tools could serve multiple groups.
The interests of these groups generally align around wanting more accurate well-being assessment, though some commercial entities might resist metrics that reveal negative impacts of their products.
Execution could proceed in phases. A simpler starting point could be a web app demonstrating how different measurement approaches yield different well-being assessments for the same person.
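The comparison logic at the core of such a demo could be sketched as follows. The two measurement functions are stand-ins invented for illustration, not real instruments, and the scaling constants are arbitrary:

```python
def survey_score(data: dict) -> float:
    """Single-item life-satisfaction question (0-10), scaled to 0-1."""
    return data["life_satisfaction"] / 10

def behavioral_score(data: dict) -> float:
    """Crude behavioral proxy from daily sleep and social-contact hours."""
    return min(1.0, (data["sleep_hours"] / 8 + data["social_hours"] / 4) / 2)

def compare_methods(data: dict) -> dict:
    """Score the same person under each method and report how far apart
    the assessments are -- the gap the web app would visualize."""
    scores = {
        "survey": round(survey_score(data), 2),
        "behavioral": round(behavioral_score(data), 2),
    }
    scores["divergence"] = round(abs(scores["survey"] - scores["behavioral"]), 2)
    return scores

# A person who reports high satisfaction but sleeps little and is isolated:
person = {"life_satisfaction": 8, "sleep_hours": 5, "social_hours": 1}
print(compare_methods(person))
# → {'survey': 0.8, 'behavioral': 0.44, 'divergence': 0.36}
```

A large divergence value is exactly the kind of discrepancy the demo would surface: the same person looks very different depending on which instrument is trusted.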
This approach would differ from existing well-being metrics by combining multiple measurement types while explicitly addressing biases. It could fill an important gap at the intersection of human well-being science and AI alignment.
Project Type: Research