The rapid advancement of AI has created a significant blind spot: while individual advanced chips are regulated, governments currently lack visibility into how these chips are combined into powerful clusters capable of training dangerous AI models. This gap makes it difficult to monitor and control potential risks from uncontrolled AI development.
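To make the "powerful clusters" notion concrete, the sketch below aggregates the theoretical peak throughput of a chip inventory and checks it against a reporting threshold. The per-chip throughput figures are approximate public specifications, and the threshold value is a purely illustrative assumption, not an official regulatory figure.

```python
# Illustrative sketch: estimate a cluster's aggregate theoretical compute
# from its chip inventory and compare it to a hypothetical reporting
# threshold. All specs and the threshold are assumptions for illustration.

# Approximate peak dense FP16/BF16 throughput per chip, in FLOP/s.
CHIP_PEAK_FLOPS = {
    "H100": 1.0e15,   # ~1,000 TFLOP/s
    "A100": 3.1e14,   # ~312 TFLOP/s
}

# Hypothetical threshold for aggregate cluster capacity (FLOP/s).
REPORTING_THRESHOLD = 1.0e20

def cluster_capacity(inventory: dict) -> float:
    """Total theoretical FLOP/s of a cluster, given {chip_model: count}."""
    return sum(CHIP_PEAK_FLOPS[model] * count
               for model, count in inventory.items())

def requires_reporting(inventory: dict) -> bool:
    """True if the cluster's aggregate capacity crosses the threshold."""
    return cluster_capacity(inventory) >= REPORTING_THRESHOLD

cluster = {"H100": 50_000, "A100": 20_000}
print(cluster_capacity(cluster))    # ~5.62e19 FLOP/s
print(requires_reporting(cluster))  # False: below the assumed threshold
```

A scheme like this is why visibility into *aggregation* matters: no single chip crosses the threshold, but the sum over a large cluster can.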
One way to address this gap could be a phased approach.
This could begin unilaterally in chip-producing nations, then expand internationally through diplomatic coordination.
The system would need to carefully balance oversight with support for legitimate research.
While not a complete solution, this approach could provide governments and safety organizations with crucial visibility into potential sources of uncontrolled AI development while creating accountability for high-power compute clusters.
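As a concrete illustration of the accountability idea, the sketch below shows what a minimal registry record for a high-power cluster might track. The field names and the audit rule are hypothetical assumptions chosen for illustration, not a proposed standard.

```python
# Hypothetical sketch of a minimal cluster-registry record. Field names,
# the example values, and the one-year audit rule are illustrative only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ClusterRecord:
    cluster_id: str
    operator: str
    jurisdiction: str
    chip_inventory: dict          # e.g. {"H100": 50_000}
    declared_purpose: str
    registered_on: date
    last_audit: Optional[date] = None

    def audit_overdue(self, today: date, max_days: int = 365) -> bool:
        """Flag clusters never audited, or not audited within max_days."""
        if self.last_audit is None:
            return True
        return (today - self.last_audit).days > max_days

record = ClusterRecord(
    cluster_id="CL-0001",
    operator="Example Labs",
    jurisdiction="US",
    chip_inventory={"H100": 50_000},
    declared_purpose="foundation model training",
    registered_on=date(2024, 1, 15),
)
print(record.audit_overdue(date(2025, 1, 1)))  # True: never audited
```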
Hours to Execute (basic)
Hours to Execute (full)
Estimated Number of Collaborators
Financial Potential
Impact Breadth
Impact Depth
Impact Positivity
Impact Duration
Uniqueness
Implementability
Plausibility
Replicability
Market Timing
Project Type: Research