Accurate forecasting is crucial for decision-making in fields like policy, finance, and science, but mid- and long-term predictions remain hard to aggregate effectively. Traditional methods, which often rely on simple averages or weights based on historical performance, may not fully capture a forecaster's skill, especially on unresolved questions where data is sparse. One way to address this gap is Full-Accuracy Scoring (FAS), a method that evaluates forecasters on both their past accuracy and how closely their predictions on unresolved questions align with the aggregated forecast.
FAS combines two key metrics to assess forecasting skill: a forecaster's track record of accuracy on questions that have already resolved, and how closely their predictions on still-open questions track the aggregated community forecast.
By balancing these dimensions, FAS could identify skilled forecasters more quickly than traditional methods, particularly for long-term predictions where historical data is limited. For example, on platforms like Metaculus, FAS might help improve aggregated forecasts by weighting contributors more dynamically.
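The proposal leaves the scoring formula open, so the following Python sketch is one plausible instantiation rather than a definitive implementation: it assumes Brier scores for resolved questions, squared distance to the current aggregate as the proxy for open ones, a mixing weight `lam`, and an exponential mapping from scores to aggregation weights. The names `fas_score` and `fas_weights` are hypothetical.

```python
import numpy as np

def brier(prob: float, outcome: int) -> float:
    """Brier score for a binary forecast: 0 is perfect, 1 is worst."""
    return (prob - outcome) ** 2

def fas_score(resolved, unresolved, lam=0.5):
    """Hypothetical FAS for one forecaster (lower is better).

    resolved   -- (probability, outcome) pairs for resolved questions
    unresolved -- (probability, aggregate_probability) pairs for open questions
    lam        -- assumed mixing weight between the two components
    """
    past = np.mean([brier(p, o) for p, o in resolved]) if resolved else 0.5
    # Proxy skill on open questions: squared distance to the current
    # aggregate forecast (an assumption; the proposal only says the
    # predictions should "align with the aggregated forecast").
    align = np.mean([(p - a) ** 2 for p, a in unresolved]) if unresolved else 0.5
    return lam * past + (1 - lam) * align

def fas_weights(scores, temperature=10.0):
    """Map FAS scores to aggregation weights: lower score, higher weight."""
    scores = np.asarray(scores, dtype=float)
    w = np.exp(-temperature * scores)  # softmax-style down-weighting
    return w / w.sum()
```

The exponential mapping is just one way to make weights respond smoothly to score differences; a platform could equally use rank-based or capped weights.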
This approach could benefit forecasting platforms seeking more accurate aggregate predictions, skilled forecasters whose ability would be recognized sooner, and the decision-makers in policy, finance, and science who consume those forecasts.
Platforms might adopt FAS if it proves superior to existing methods, while forecasters could be motivated by faster rewards—though some might resist if their performance is exposed as weaker.
A minimal test could involve partnering with a forecasting platform to apply FAS to a subset of questions and comparing its performance against traditional aggregation, as sketched below. Key challenges might include securing platform cooperation, the long resolution horizons that delay final evaluation, and the risk that rewarding agreement with the aggregate encourages herding rather than independent judgment.
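For the comparison itself, one concrete (assumed) metric is the Brier score of each aggregate on questions that eventually resolve. The sketch below backtests a simple mean against a FAS-weighted mean, using per-forecaster weights such as those produced by the hypothetical `fas_weights` above.

```python
import numpy as np

def compare_aggregators(probs, outcomes, weights):
    """Backtest sketch: simple mean vs. FAS-weighted mean.

    probs    -- array of shape (n_questions, n_forecasters)
    outcomes -- array of shape (n_questions,), 0/1 resolutions
    weights  -- per-forecaster weights summing to 1, e.g. from fas_weights()

    Returns the mean Brier score of each aggregate (lower is better).
    """
    simple = probs.mean(axis=1)          # traditional unweighted average
    weighted = probs @ weights           # FAS-weighted average
    score = lambda p: float(np.mean((p - outcomes) ** 2))
    return score(simple), score(weighted)
```

A real trial would recompute the weights as questions resolve, so that the weighted aggregate only ever uses information available at forecast time.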
If successful, FAS could be expanded across platforms, offering a more nuanced way to evaluate and aggregate forecasts—especially for long-term, uncertain events.
Project Type: Research