Curriculum Learning for RL Agents is Making a Comeback
By Yury Zhuk on January 5, 2026 · 1 min read
Screenshots from Kyoung Whan Choe's training log
Old ML techniques are finding new life with modern tooling. Curriculum learning - gradually increasing task difficulty - is proving valuable for training RL agents.
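The core idea can be sketched in a few lines. This is a minimal, illustrative scheduler (not from the linked post): the agent trains on an easy task level and is promoted to the next level once its recent success rate clears a threshold. All names and parameters here are assumptions for the sketch.

```python
# Minimal curriculum scheduler sketch: advance to a harder task level
# once the agent's recent success rate crosses a threshold.
from collections import deque


class CurriculumScheduler:
    def __init__(self, levels, promote_at=0.8, window=100):
        self.levels = levels          # ordered task difficulties, easy -> hard
        self.idx = 0                  # index of the current level
        self.promote_at = promote_at  # success rate required to advance
        self.results = deque(maxlen=window)  # rolling record of episode outcomes

    @property
    def level(self):
        return self.levels[self.idx]

    def record(self, success: bool):
        """Log one episode outcome and promote if the window is full and good enough."""
        self.results.append(success)
        window_full = len(self.results) == self.results.maxlen
        rate = sum(self.results) / len(self.results)
        if window_full and rate >= self.promote_at and self.idx < len(self.levels) - 1:
            self.idx += 1
            self.results.clear()  # fresh statistics for the new level
```

In practice the "level" would parameterize the environment (board size, opponent strength, spawn distance, etc.), and the training loop calls `record()` after each episode.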
Great post on HN today about curriculum learning for agents (the reinforcement learning kind, not the LLM kind): https://kywch.github.io/blog/2025/12/curriculum-learning-2048-tetris/
Very cool to see curriculum learning pop up again; always thought it had more potential.
I used it back in 2019 to speed up training and improve performance for a vision model, with great results in production.
Also a fun throwback on the multi-agent side. I toyed with multi-agent RL almost 7 years ago: https://github.com/Teetertater/MARLAnts
Seeing these ideas come back around with better tooling and more compute is super exciting.