As frontier AI systems advance, policymakers increasingly focus on controlling access to compute resources like GPUs to regulate AI development. However, this approach assumes compute will remain the primary bottleneck—an assumption that may not hold as other constraints like algorithmic progress, financial limitations, or data scarcity become more significant. This creates a gap in governance strategies, which risk becoming obsolete if they fail to account for these shifting bottlenecks.
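The bottleneck-shift concern can be made concrete with a toy model. The sketch below, with deliberately made-up growth rates (the 2x/year hardware and 1.5x/year algorithmic-efficiency figures are illustrative assumptions, not empirical estimates), shows how sustained algorithmic progress can erode the leverage of a hardware cap: capping compute delays a capability threshold, but does not prevent it.

```python
# Toy model: effective compute = hardware compute x algorithmic efficiency.
# All growth rates are illustrative assumptions, not empirical estimates.

def years_to_threshold(threshold, hw_growth, algo_growth, hw_cap=None):
    """Years until effective compute first reaches `threshold`,
    starting from 1.0 units of hardware and 1.0x algorithmic efficiency.
    `hw_cap` optionally freezes hardware at a fixed multiple of today's level."""
    hw, algo = 1.0, 1.0
    for year in range(1, 101):
        hw = min(hw * hw_growth, hw_cap) if hw_cap else hw * hw_growth
        algo *= algo_growth
        if hw * algo >= threshold:
            return year
    return None  # threshold not reached within the horizon

# No hardware cap vs. hardware frozen at 4x today's level:
uncapped = years_to_threshold(1000, hw_growth=2.0, algo_growth=1.5)
capped = years_to_threshold(1000, hw_growth=2.0, algo_growth=1.5, hw_cap=4.0)
# Under these assumed rates the cap buys a delay of several years,
# but algorithmic progress alone eventually crosses the threshold.
```

Under these assumptions the capped scenario still reaches the threshold, just later, which is the core reason compute-only controls may lose effectiveness over time.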
One way to address this gap could involve systematically investigating the limitations of compute-centric governance and identifying which emerging constraints, such as algorithmic progress, financing, or data availability, are likely to bind next.

The output could be a research report or a series of papers highlighting vulnerabilities in current approaches and proposing more holistic strategies.
Key beneficiaries might include policymakers seeking resilient governance frameworks, AI labs anticipating future constraints, and researchers exploring AI safety. Execution could follow a phased approach, moving from a low-cost test of the core assumptions to fuller analysis.
A minimum viable product could be a preliminary expert survey to test assumptions about future bottlenecks.
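The survey MVP would ultimately need some way to aggregate disparate expert estimates. One minimal sketch, using entirely hypothetical responses (the constraint names echo those discussed above, but every number is invented for illustration), is to take the median forecast per constraint:

```python
# Minimal sketch of aggregating the expert survey: median forecast per
# constraint. All responses below are hypothetical placeholder data.
from statistics import median

# Years until each constraint overtakes compute as the binding bottleneck,
# one estimate per (hypothetical) expert.
responses = {
    "algorithmic progress": [3, 5, 4, 8, 5],
    "data scarcity":        [2, 6, 4, 4, 10],
    "financing":            [7, 9, 5, 12, 8],
}

# Median is robust to the outliers common in expert elicitation.
consensus = {name: median(vals) for name, vals in responses.items()}

# The constraint experts expect to bind soonest.
earliest = min(consensus, key=consensus.get)
```

A real survey would need calibration questions and uncertainty ranges, but even this level of aggregation would let the project test whether experts agree that compute remains the near-term bottleneck.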
Unlike existing efforts—such as US chip export policies or academic research on algorithmic efficiency—this approach would explicitly connect technical trends to governance implications. For example, while supply chain mapping reports focus on semiconductors broadly, this work could tailor insights specifically to AI policy needs.
By examining compute governance limitations and alternative bottlenecks, this work could inform more resilient strategies, combining technical, economic, and geopolitical analysis.
Project Type: Research