In this episode of the Colaberry AI Podcast, we spotlight Fathom-R1-14B, a powerful open-source large language model from Mumbai-based AI company Fractal. With 14 billion parameters, this model is designed to excel at mathematical reasoning and delivers performance comparable to closed-source models like o4-mini—at a fraction of the cost.
Fine-tuned from DeepSeek-R1-Distill-Qwen-14B and released under the MIT license, Fathom-R1-14B comes with its training recipes and datasets fully open-sourced, fostering collaboration and community-driven progress. Its standout ability to solve olympiad-level problems from competitions such as AIME-25 and HMMT-25 makes it a major step forward for accessible, high-level AI reasoning.
🎯 Key Takeaways:
⚡ 14B-parameter model specialized in mathematical reasoning
📘 Achieves olympiad-level accuracy on AIME-25 and HMMT-25 benchmarks
💸 Post-training cost: only $499
🔓 Open-source release under MIT license
🤝 Includes full datasets and training recipes for community use
🧾 Ref:
Fractal Releases Fathom-R1-14B Reasoning Model
🎧 Listen to our audio podcast:
👉 Colaberry AI Podcast
Stay Connected for Daily AI Breakdowns:
LinkedIn
YouTube
Twitter/X
Contact Us:
ai@colaberry.com
(972) 992-1024
Disclaimer:
This episode is created for educational purposes only. All rights to referenced materials belong to their respective owners. If you believe any content may be incorrect or violates copyright, kindly contact us at ai@colaberry.com, and we will address it promptly.