verl: Volcano Engine Reinforcement Learning for LLM

verl is a flexible, efficient and production-ready RL training library for large language models (LLMs).

verl is the open-source implementation of the paper HybridFlow: A Flexible and Efficient RLHF Framework.

verl is flexible and easy to use with:

  • Easy extension of diverse RL algorithms: The hybrid programming model combines the strengths of the single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex post-training dataflows, allowing users to build RL dataflows in a few lines of code (see the conceptual sketch after this list).

  • Seamless integration of existing LLM infra with modular APIs: Decouples computation and data dependencies, enabling smooth integration with existing LLM frameworks such as PyTorch FSDP, Megatron-LM, and vLLM. Users can easily extend verl to other LLM training and inference frameworks.

  • Flexible device mapping: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.

  • Ready integration with popular HuggingFace models
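
As a rough illustration of the hybrid programming model, the toy sketch below shows a single controller driving the rollout, reward, and actor-update steps of one training iteration from a plain script. It is framework-agnostic Python, not verl's actual API: every class and method name here is illustrative only.

# Toy single-controller RL post-training loop (illustrative only, NOT verl's API).
# In verl, the rollout/actor/critic roles are parallel worker groups driven
# from a single controller script, with resharding handled by the framework.

class RolloutWorker:
    def generate(self, prompts):
        # Placeholder: an inference engine (e.g. vLLM) would produce responses.
        return [p + " <response>" for p in prompts]

class RewardFunction:
    def score(self, prompts, responses):
        # Placeholder: a rule-based or model-based reward.
        return [float(len(r) > len(p)) for p, r in zip(prompts, responses)]

class ActorWorker:
    def update(self, prompts, responses, rewards):
        # Placeholder: a PPO/GRPO-style policy update over the collected batch.
        return {"mean_reward": sum(rewards) / max(len(rewards), 1)}

def train(num_steps, prompts):
    rollout, reward_fn, actor = RolloutWorker(), RewardFunction(), ActorWorker()
    for step in range(num_steps):
        responses = rollout.generate(prompts)                 # generation phase
        rewards = reward_fn.score(prompts, responses)         # reward phase
        metrics = actor.update(prompts, responses, rewards)   # training phase
        print(f"step {step}: {metrics}")

if __name__ == "__main__":
    train(num_steps=2, prompts=["1 + 1 = ?", "Capital of France?"])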

verl is fast with:

  • State-of-the-art throughput: By seamlessly integrating existing SOTA LLM training and inference frameworks, verl achieves high generation and training throughput.

  • Efficient actor model resharding with 3D-HybridEngine: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.

Documentation | Paper | Slack | Wechat | Twitter

News

  • [2025/3] We will present verl (HybridFlow) at EuroSys 2025. See you in Rotterdam!
  • [2025/3] We will introduce the programming model of verl at the vLLM Beijing Meetup on 3/16. See you in Beijing!
  • [2025/2] verl v0.2.0.post1 is released! See release note for details.
  • [2025/2] We presented verl at the Bytedance/NVIDIA/Anyscale Ray Meetup. See you in San Jose!
  • [2025/1] Doubao-1.5-pro is released with SOTA-level performance on LLM & VLM. The RL scaling preview model is trained using verl, reaching OpenAI O1-level performance on math benchmarks (70.0 pass@1 on AIME).
  • [2024/12] The team presented Post-training LLMs: From Algorithms to Infrastructure at NeurIPS 2024. Slides and video available.
  • [2024/12] verl is presented at Ray Forward 2024. Slides available here.
  • [2024/10] verl is presented at Ray Summit. Youtube video available.
  • [2024/08] HybridFlow (verl) is accepted to EuroSys 2025.

Key Features

  • FSDP and Megatron-LM for training.
  • vLLM and HF Transformers for rollout generation, with SGLang support coming soon.
  • Compatible with Hugging Face Transformers and Modelscope Hub.
  • Supervised fine-tuning.
  • Reinforcement learning with PPO, GRPO, ReMax, Reinforce++, RLOO, etc.
    • Support for model-based rewards and function-based (verifiable) rewards; see the reward sketch after this list
    • Support for vision-language models (VLMs) and multi-modal RL
  • Flash attention 2, sequence packing, sequence parallelism support via DeepSpeed Ulysses, LoRA, Liger-kernel.
  • Scales up to 70B models and hundreds of GPUs.
  • Experiment tracking with wandb, swanlab, mlflow and tensorboard.
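
To make the function-based (verifiable) reward item above concrete, here is a minimal sketch of a rule-based reward that checks a numeric answer by exact match. The compute_score name and the (solution_str, ground_truth) signature are illustrative assumptions, not verl's documented interface; consult the verl documentation for the exact reward-function interface of your version.

import re

def compute_score(solution_str: str, ground_truth: str) -> float:
    """Toy verifiable reward: 1.0 if the last number in the model response
    matches the reference answer, else 0.0. Name and signature are
    illustrative; verl's actual reward interface may differ."""
    numbers = re.findall(r"-?\d+\.?\d*", solution_str)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1].rstrip(".") == str(ground_truth).strip() else 0.0

# Example usage:
print(compute_score("The answer is 42.", "42"))  # 1.0
print(compute_score("I think it's 7", "42"))     # 0.0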

Upcoming Features

  • Reward model training
  • DPO training
  • DeepSeek integration with Megatron v0.11
  • SGLang integration

Getting Started

  • Quickstart
  • Running a PPO example step by step
  • Reproducible algorithm baselines
  • Code explanation and advanced usage (extension)
  • Blogs from the community

Check out this Jupyter Notebook to get started with PPO training on a single 24GB L4 GPU (free GPU quota provided by Lightning Studio)!

Performance Tuning Guide

Performance is essential for on-policy RL algorithms. We provide a detailed performance tuning guide to help users tune their training. See here for more details.

vLLM v0.7 integration preview

We have released a preview version of verl that supports vLLM >= 0.7.0. Please refer to this document for the installation guide and more information.

Citation and acknowledgement

If you find the project helpful, please cite:

@article{sheng2024hybridflow,
  title   = {HybridFlow: A Flexible and Efficient RLHF Framework},
  author  = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},
  year    = {2024},
  journal = {arXiv preprint arXiv:2409.19256}
}

verl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. The project is adopted and supported by Anyscale, Bytedance, LMSys.org, Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, University of Hong Kong, and many more.

Awesome work using verl

  • TinyZero: a reproduction of DeepSeek R1 Zero recipe for reasoning tasks
  • PRIME: Process reinforcement through implicit rewards
  • RAGEN: a general-purpose reasoning agent training framework
  • Logic-RL: a reproduction of DeepSeek R1 Zero on 2K Tiny Logic Puzzle Dataset.
  • SkyThought: RL training for Sky-T1-7B by NovaSky AI team.
  • deepscaler: iterative context scaling with GRPO
  • critic-rl: LLM critics for code generation
  • Easy-R1: Multi-modal RL training framework
  • self-rewarding-reasoning-LLM: self-rewarding and correction with generative reward models
  • Search-R1: RL with reasoning and searching (tool-call) interleaved LLMs
  • Code-R1: Reproducing R1 for Code with Reliable Rewards
  • DQO: Enhancing multi-step reasoning abilities of language models through direct Q-function optimization
  • FIRE: Flaming-hot initiation with regular execution sampling for large language models
  • ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning
  • DeepRetrieval: Let LLMs learn to search and retrieve desirable docs with RL
  • cognitive-behaviors: Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs

Contribution Guide

Contributions from the community are welcome! Please check out our roadmap and release plan.

Code formatting

We use yapf (Google style) to enforce strict code formatting when reviewing PRs. To reformat your code locally, make sure you have installed the latest yapf:

pip3 install yapf --upgrade

Then, make sure you are at the top level of the verl repo and run:

bash scripts/format.sh

We are HIRING! Send us an email if you are interested in internship/FTE opportunities in MLSys/LLM reasoning/multimodal alignment.