Curriculum Learning for RL Agents is Making a Comeback

By Yury Zhuk on January 5, 2026 · 1 min read


Screenshots from Kyoung Whan Choe's Training log

Old ML techniques are finding new life with modern tooling. Curriculum learning - gradually increasing task difficulty - is proving valuable for training RL agents.
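To make the idea concrete, here is a minimal sketch of one common curriculum scheme: promote the agent to a harder task once its recent success rate clears a threshold. All names (`CurriculumScheduler`, the level values, the window and threshold defaults) are illustrative assumptions, not taken from the linked post.

```python
class CurriculumScheduler:
    """Minimal curriculum-learning sketch: raise task difficulty once the
    agent's rolling success rate clears a threshold. Illustrative only."""

    def __init__(self, levels, window=20, threshold=0.8):
        self.levels = levels          # ordered task settings, easy -> hard
        self.level = 0                # index of the current difficulty
        self.window = window          # number of recent episodes tracked
        self.threshold = threshold    # success rate required to advance
        self.recent = []              # rolling record of episode outcomes

    def current_task(self):
        return self.levels[self.level]

    def report(self, success):
        """Record one episode outcome; promote when the success rate over
        the last `window` episodes reaches `threshold`."""
        self.recent.append(bool(success))
        if len(self.recent) > self.window:
            self.recent.pop(0)
        rate = sum(self.recent) / len(self.recent)
        if (len(self.recent) == self.window
                and rate >= self.threshold
                and self.level < len(self.levels) - 1):
            self.level += 1
            self.recent = []          # fresh statistics for the new level


# Hypothetical usage: board sizes for a 2048-style task, easy to hard.
sched = CurriculumScheduler(levels=[2, 3, 4])
for _ in range(20):
    sched.report(True)                # agent keeps succeeding
print(sched.current_task())           # promoted to the next board size: 3
```

The same pattern generalizes to any RL setup where "difficulty" is a parameter of the environment (map size, opponent strength, episode length); the scheduler only needs episode outcomes, not gradients.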

#AI #reinforcement learning

Great post on HN today about curriculum learning for agents (the 𝗿𝗲𝗶𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 kind, not the LLM kind) 😅 https://kywch.github.io/blog/2025/12/curriculum-learning-2048-tetris/

Very cool to see curriculum learning pop up again; I always thought it had more potential.

I used it back in 2019 to speed up training and improve performance for a vision model (great results in production!).

Also a fun throwback on the multi-agent side. I toyed with multi-agent RL almost 7 years ago: https://github.com/Teetertater/MARLAnts

Seeing these ideas come back around with better tooling and more compute is super exciting.

Need support for your AI project?

Let's work together!
