ARC Prize 2025: Unlocking the Future of AI with Refinement Loops

The year 2025 has been one of refinement and progress in the world of artificial general intelligence (AGI). We're thrilled to share the results and analysis of the ARC Prize 2025, a competition that has pushed the boundaries of AGI development. But before we dive into the winners and their achievements, let's take a moment to appreciate the significance of this year's theme: the refinement loop.

The Refinement Loop: Unlocking AGI Potential

At the heart of AGI progress in 2025 lies the refinement loop: an iterative process that transforms one program into another, optimizing it incrementally based on feedback. Each pass keeps what works and revises what doesn't, steadily improving the candidate solution.
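The loop itself is simple to state. As a minimal sketch (the scoring and mutation functions here are hypothetical stand-ins for task-specific feedback, not anything from the winning entries):

```python
import random

def refine(program, score, mutate, steps=200):
    """Generic refinement loop: repeatedly propose a variant of the
    current program and keep it only if the feedback signal improves."""
    best, best_score = program, score(program)
    for _ in range(steps):
        candidate = mutate(best)              # propose a variant
        candidate_score = score(candidate)    # get feedback
        if candidate_score > best_score:      # keep only improvements
            best, best_score = candidate, candidate_score
    return best

# Toy usage: "programs" are integers, feedback is closeness to 42.
result = refine(
    program=0,
    score=lambda p: -abs(p - 42),
    mutate=lambda p: p + random.choice([-3, -1, 1, 3]),
)
```

Because only improvements are accepted, the loop can never make the candidate worse than the starting point; everything interesting lives in how `score` and `mutate` are defined for a given task.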

Two notable examples of this refinement process are Evolutionary Test-Time Compute (by J. Berman) and Evolutionary Program Synthesis (by E. Pang). Berman's approach uses natural language to evolve an ARC solution program, while Pang's method builds a dynamic library of program abstractions to guide synthesis. Both alternate between two phases, exploration and verification, refining candidate programs until they produce accurate answers.
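Neither winning system is reproduced here, but the explore/verify pattern they share can be sketched with a toy population of candidate "programs" (here just digit strings scored against a target):

```python
import random

def evolve(seeds, fitness, mutate, generations=30, pop_size=16):
    """Explore/verify sketch: each generation proposes mutated
    candidates (explore), then scores the population against the
    task's examples and keeps only the fittest (verify)."""
    population = list(seeds)
    for _ in range(generations):
        variants = [mutate(random.choice(population)) for _ in range(pop_size)]
        population = sorted(population + variants, key=fitness, reverse=True)[:pop_size]
    return population[0]

# Toy task: recover the string "427" one digit at a time.
def mutate_digit(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice("0123456789") + s[i + 1:]

best = evolve(
    seeds=["000"],
    fitness=lambda s: sum(a == b for a, b in zip(s, "427")),
    mutate=mutate_digit,
)
```

In the real systems the candidates are programs (or natural-language program descriptions), and verification means running them against the task's demonstration pairs rather than comparing characters.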

Industry Progress: Commercial AI Systems Step Up

In 2025, we've witnessed significant advances in commercial AI systems, particularly in their ability to run refinement loops. Take ARC-AGI task #4cd1b7b2: Gemini 3 Pro solved it using just 96 reasoning tokens, while Gemini 3 Deep Think spent 138,000. The gap shows how differently commercial systems budget test-time refinement on the same task: more tokens buy longer programs and more rounds of refinement.

One fascinating finding is that refinement loops can be added at the application layer, improving task reliability without relying solely on provider reasoning systems. This opens up new possibilities for enhancing the performance of commercial AI models.
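A minimal sketch of such an application-layer loop, assuming a hypothetical `call_model` API wrapper and a task-specific `verify` checker (neither is a real provider interface):

```python
def solve_with_refinement(task, call_model, verify, max_rounds=5):
    """Application-layer refinement: call the model, verify the
    candidate against the task's training pairs, and feed failures
    back as context for the next attempt."""
    feedback, candidate = None, None
    for _ in range(max_rounds):
        candidate = call_model(task, feedback)
        ok, feedback = verify(task, candidate)
        if ok:
            break
    return candidate

# Toy demo: the "model" proposes multipliers; verify checks train pairs.
task = {"train": [(2, 4), (3, 6)], "test": 5}
guesses = iter([3, 1, 2])

def call_model(task, feedback):
    # Stand-in for a provider API call; a real loop would include
    # `feedback` in the next prompt.
    return next(guesses)

def verify(task, k):
    misses = [(x, y) for x, y in task["train"] if x * k != y]
    return (not misses, misses or None)

k = solve_with_refinement(task, call_model, verify)
```

Because the verification happens against the task's own demonstration pairs, this wrapper needs nothing from the provider beyond an ordinary completion endpoint.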

Competition Progress: Unlocking New Training Approaches

The ARC Prize 2025 competition has not only showcased impressive results but also inspired new training approaches for deep learning models. Traditionally, deep learning models are trained using input/output pairs and gradient descent to create static neural networks. However, refinement loops are now becoming the basis for a different type of training.

The Tiny Recursive Model (TRM), for instance, achieved remarkable test accuracy on ARC-AGI-1 and ARC-AGI-2 with a tiny 7M parameter network. The model recursively improves its predicted answer over multiple passes, an extremely parameter-efficient approach that also helps limit overfitting.
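The real TRM architecture and training procedure are more involved, but the core idea, one small function applied repeatedly to its own answer, can be sketched as:

```python
def recursive_refine(net, x, y0, steps=8):
    """Recursive refinement sketch: the *same* small network is
    applied repeatedly, each pass reading the input plus the current
    answer and emitting an improved answer."""
    y = y0
    for _ in range(steps):
        y = net(x, y)   # identical weights reused every step
    return y

# Toy "network" with hand-set weights: nudge the answer halfway
# toward the true mapping x -> 2x on each pass.
net = lambda x, y: y + 0.5 * (2 * x - y)
y = recursive_refine(net, x=3.0, y0=0.0)
```

The parameter efficiency comes from weight sharing across passes: depth is bought with iteration rather than with new layers.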

Another novel example is CompressARC, which uses only 76K parameters yet achieves impressive results on ARC-AGI-1. This solution minimizes the description length of each task at test time, allowing a tiny network to generalize.

Open-Source Contributions: Pushing Boundaries Together

The ARC Prize 2025 winners and their open-source contributions have been instrumental in advancing AGI progress. All winning solutions and papers are now available for everyone to explore and build upon.

The Tiny Recursive Model (TRM) by Alexia Jolicoeur-Martineau, building on earlier Hierarchical Reasoning Model (HRM) work, showcases how recursive reasoning can lead to significant improvements. CompressARC by Isaac Liao demonstrates the power of minimizing description length, achieving remarkable results with minimal parameters.

The Future of ARC: Pushing Towards AGI

As we look ahead, the ARC Prize team is excited to announce the upcoming release of ARC-AGI-3 early next year. This new version will mark a significant format change, testing interactive reasoning and demanding new AI capabilities.

ARC-AGI-3 will focus on key concepts such as exploration, planning, memory, goal acquisition, and alignment. Our early testing and studies show promising results, and we believe this new format will push the boundaries of what AI can achieve.

Conclusion: A Community Effort Towards AGI

The ARC Prize 2025 has been a testament to the power of collaboration and innovation. We are incredibly grateful to our partners, sponsors, and the entire ARC community for their support and dedication.

We'd like to give a special shout-out to our community members, especially Mark Barney and Simon Strandgaard, for their ongoing contributions and support. And a huge thank you to ARC Prize President Greg Kamradt, Bryan Landers, and our co-founder Francois Chollet for their vision and leadership.

As we wrap up 2025, we're excited to continue our journey towards AGI. If you're passionate about making an impact in this field, we invite you to join the ARC Prize team and be a part of this incredible community. Together, we can unlock the potential of AGI and shape the future of AI research.
