Workshop on Reinforcement Learning Beyond Rewards: Towards Scalable General-Purpose Agents
Reinforcement Learning Conference (RLC) 2026
August 15, 2026
@RLBRew_RLC · #RLBRew_RLC
Call for Papers
We encourage submissions of up to 10 pages of content (with no limit on references and supplementary material) that have not been previously accepted at an archival venue (such as ICML, NeurIPS, or ICLR). We also allow submissions of recently accepted papers, which will be judged to a higher standard. While up to 10 pages of content is allowed, we strongly encourage authors to limit their submissions to 4-8 pages to ensure higher-quality reviewer feedback. Papers should be anonymized for double-blind review.
Paper length: up to 10 content pages (excluding the references, acknowledgements, and appendices).
Format: RLC author kit (the cover page is optional) (overleaf link).
Topics of Interest. We encourage discussions and contributions around (but not limited to) the following topics, which are important in the context of reward-free RL:
- Reward-Free Task Specification: Learning from preferences, language, cross-embodiment, social constraints, safety, demonstrations, implicit human feedback.
- Utilizing large-scale reward-free data: Learning representations and skills, learning from video, and novel datasets.
- Utilizing foundation models for efficient adaptation and fine-tuning.
- Using reward-free interactions: exploration, sample efficiency, unsupervised skill learning, self-supervised objectives, goal-conditioning, and model learning for RL.
| Event | Date |
| --- | --- |
| Submission deadline | May 23, 2026, 11:59 PM AoE |
| Decisions announced | June 6, 2026, 11:59 PM AoE |
| Day of workshop | August 15, 2026 |
Review and Selection Process
- All reviews will be double-blind.
- All accepted papers will be presented as posters at the workshop.
- Select papers will be given spotlight presentations.
- Papers will be non-archival.
- At least one co-author of each accepted paper is expected to register for RLC 2026 and attend the poster sessions.
- All accepted papers will be made public on OpenReview.