[Bug]: Memory leak due to LLMEngine.seq_id_to_seq_group #14353

Closed
1 task done
NIL-zhuang opened this issue Mar 6, 2025 · 1 comment
Labels
bug Something isn't working

Comments

NIL-zhuang commented Mar 6, 2025

Your current environment

The output of `python collect_env.py`
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.17

Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L40
GPU 1: NVIDIA L40
GPU 2: NVIDIA L40
GPU 3: NVIDIA L40

Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                192
On-line CPU(s) list:   0-71
Off-line CPU(s) list:  72-191
Thread(s) per core:    0
Core(s) per socket:    48
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 143
Model name:            Intel(R) Xeon(R) Platinum 8468V
Stepping:              8
CPU MHz:               2899.875
CPU max MHz:           3800.0000
CPU min MHz:           800.0000
BogoMIPS:              4800.00
Virtualization:        VT-x
L1d cache:             48K
L1i cache:             32K
L2 cache:              2048K
L3 cache:              99840K
NUMA node0 CPU(s):     0-47,96-143
NUMA node1 CPU(s):     48-95,144-191
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] flashinfer-python==0.2.1.post2+cu124torch2.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.11.0
[pip3] torchvision==0.20.1
[pip3] transformers==4.49.0
[pip3] triton==3.1.0
[conda] flashinfer-python         0.2.1.post2+cu124torch2.5          pypi_0    pypi
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-cublas-cu12        12.1.3.1                 pypi_0    pypi
[conda] nvidia-cuda-cupti-cu12    12.1.105                 pypi_0    pypi
[conda] nvidia-cuda-nvrtc-cu12    12.1.105                 pypi_0    pypi
[conda] nvidia-cuda-runtime-cu12  12.1.105                 pypi_0    pypi
[conda] nvidia-cudnn-cu12         9.1.0.70                 pypi_0    pypi
[conda] nvidia-cufft-cu12         11.0.2.54                pypi_0    pypi
[conda] nvidia-curand-cu12        10.3.2.106               pypi_0    pypi
[conda] nvidia-cusolver-cu12      11.4.5.107               pypi_0    pypi
[conda] nvidia-cusparse-cu12      12.1.0.106               pypi_0    pypi
[conda] nvidia-nccl-cu12          2.21.5                   pypi_0    pypi
[conda] nvidia-nvjitlink-cu12     12.8.61                  pypi_0    pypi
[conda] nvidia-nvtx-cu12          12.1.105                 pypi_0    pypi
[conda] pyzmq                     26.2.1                   pypi_0    pypi
[conda] torch                     2.5.1+cu121              pypi_0    pypi
[conda] torchaudio                2.5.1                    pypi_0    pypi
[conda] torchvision               0.20.1                   pypi_0    pypi
[conda] transformers              4.49.0                   pypi_0    pypi
[conda] triton                    3.1.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PIX     PXB     SYS     PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS     0-47    0               N/A
GPU1    PIX      X      PXB     SYS     PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS     0-47    0               N/A
GPU2    PXB     PXB      X      SYS     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS     0-47    0               N/A
GPU3    SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     PXB     PXB     SYS     SYS     48-71   1               N/A
NIC0    PXB     PXB     PIX     SYS      X      PIX     SYS     SYS     SYS     SYS     SYS     SYS
NIC1    PXB     PXB     PIX     SYS     PIX      X      SYS     SYS     SYS     SYS     SYS     SYS
NIC2    SYS     SYS     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS     SYS     SYS
NIC3    SYS     SYS     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS     SYS     SYS
NIC4    SYS     SYS     SYS     PXB     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS
NIC5    SYS     SYS     SYS     PXB     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS
NIC6    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX
NIC7    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7

CUDNN_VERSION=8.9.4.25
NVIDIA_REQUIRE_CUDA=cuda>=11.4 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 driver>=450
NVIDIA_VISIBLE_DEVICES=GPU-e2172153-4965-45f3-2cab-55d55316b64a,GPU-f47f6186-d89c-0b59-7fd9-d84a39859395,GPU-28b27fcb-18d4-5f33-cbbf-93977970f9ec,GPU-523efc59-4e5b-9558-051c-965b979eb9c6
LD_LIBRARY_PATH=/usr/local/conda/lib/python3.10/site-packages/cv2/../../lib64:/lib64:/usr/lib64:/lib64:/usr/lib64:/lib64:/usr/lib64:/lib64:/usr/lib64:/lib64:/usr/lib64:/lib64:/usr/lib64:/lib64:/usr/lib64:/lib64:/usr/lib64::/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/java/jre/lib/amd64/server:/opt/meituan/hadoop/lib/native:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/java/jre/lib/amd64/server:/opt/meituan/hadoop/lib/native
NCCL_IB_GID_INDEX=7
NVIDIA_DRIVER_CAPABILITIES=compute,utility
OMP_NUM_THREADS=72
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

🐛 Describe the bug

vLLM never releases entries from LLMEngine.seq_id_to_seq_group, so finished requests remain referenced in memory, eventually leading to excessive memory usage and out-of-memory (OOM) errors.
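
The failure mode can be illustrated in isolation with a toy sketch (hypothetical code, not vLLM's implementation): a dict keyed by sequence id that is populated when a request is admitted but never pruned when the request finishes keeps every prompt and multimodal payload reachable, so process memory grows without bound.

```python
# Hypothetical illustration of the leak pattern, NOT vLLM's actual code.
class ToyEngine:
    def __init__(self):
        self.seq_id_to_seq_group: dict[int, dict] = {}
        self._next_seq_id = 0

    def add_request(self, prompt: str, image_bytes: bytes) -> int:
        seq_id = self._next_seq_id
        self._next_seq_id += 1
        # The prompt and image stay referenced by this dict ...
        self.seq_id_to_seq_group[seq_id] = {"prompt": prompt, "image": image_bytes}
        return seq_id

    def finish_request(self, seq_id: int) -> None:
        # ... because nothing ever does:
        # self.seq_id_to_seq_group.pop(seq_id, None)
        pass


engine = ToyEngine()
for i in range(1000):
    sid = engine.add_request(f"prompt {i}", b"\x00" * (1024 * 1024))  # ~1 MiB payload
    engine.finish_request(sid)

print(len(engine.seq_id_to_seq_group))  # 1000 -- entries are never released
```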

Code to reproduce:

import torch
from torch.distributed import breakpoint
from vllm import LLM, SamplingParams
from PIL import Image
import resource
import math
from datetime import datetime

MODEL_PATH = "Qwen/Qwen2.5-VL-3B-Instruct"
TEST_IMG_PATH = "test.jpeg"

VISION_PROMPT_TEMPLATE = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nQuestion No.{idx:08}<|vision_start|><|image_pad|><|vision_end|>"
    "Conclude this image in 10 words.<|im_end|>\n<|im_start|>assistant\n<think>"
)


def construct_query(idx: int, max_pixels: int = 2048 * 2048, min_pixels: int = 512 * 512):
    prompt = VISION_PROMPT_TEMPLATE.format(idx=idx)
    image = Image.open(TEST_IMG_PATH)

    if (image.width * image.height) > max_pixels:
        resize_factor = math.sqrt(max_pixels / (image.width * image.height))
        width, height = int(image.width * resize_factor), int(image.height * resize_factor)
        image = image.resize((width, height))

    if (image.width * image.height) < min_pixels:
        resize_factor = math.sqrt(min_pixels / (image.width * image.height))
        width, height = int(image.width * resize_factor), int(image.height * resize_factor)
        image = image.resize((width, height))

    if image.mode != "RGB":
        image = image.convert("RGB")

    return {"prompt": prompt, "multi_modal_data": {"image": image}}


class vLLMRollout:
    def __init__(self):
        self.sampling_params = SamplingParams(n=2, top_p=1.0, temperature=1.0, ignore_eos=False, max_tokens=10)
        self.inference_engine = LLM(
            model=MODEL_PATH,
            enable_sleep_mode=True,
            tensor_parallel_size=1,
            distributed_executor_backend="external_launcher",
            dtype="bfloat16",
            enforce_eager=True,
            gpu_memory_utilization=0.7,
            disable_custom_all_reduce=True,
            skip_tokenizer_init=False,
            max_model_len=4096,
            disable_log_stats=True,
            enable_prefix_caching=True,
        )

    @torch.no_grad()
    def generate_sequences(self, batch):
        outputs = self.inference_engine.generate(batch, self.sampling_params, use_tqdm=False)
        return outputs


def main():
    # log memory usage every LOG_PER_BATCH queries (i.e. every 10 batches of BATCH_SIZE)
    BATCH_SIZE = 20
    LOG_PER_BATCH = 200

    total_cnt = 0
    rollout = vLLMRollout()
    memory_cost = []

    def print_red(s: str):
        print(f"\033[31m{s}\033[0m")

    while True:
        if total_cnt % LOG_PER_BATCH == 0:
            time = datetime.now().strftime(r"%m-%d %H:%M:%S")
            mem_usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            mem_usage = round(mem_usage / (1024**2), 2)
            memory_cost.append(mem_usage)
            print("===========================================")
            print_red(f"[MEMORY {time} already inferred {total_cnt} queries] Memory usage: {mem_usage:.2f} GB")
            print(memory_cost)
            if total_cnt > 0:
                breakpoint()
            print("===========================================")

        data = [construct_query(idx) for idx in range(total_cnt, total_cnt + BATCH_SIZE)]
        rollout.generate_sequences(data)
        seq_id_to_seq_group_size = len(rollout.inference_engine.llm_engine.seq_id_to_seq_group)
        print_red(f"seq_id_to_seq_group has {seq_id_to_seq_group_size} sequences")
        total_cnt += BATCH_SIZE


if __name__ == "__main__":
    main()

Run the code above and you will see that the size of seq_id_to_seq_group keeps growing while memory usage steadily increases. This is the direct cause of hiyouga/EasyR1#50 and volcengine/verl#429.

This happens because LLMEngine.seq_id_to_seq_group retains every input request, including both the token IDs and the image tensors.
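
Until a fix lands upstream, one possible stopgap (a hedged workaround relying on an internal attribute, not a supported API) is to drop the retained entries manually after each call to generate():

```python
# Hedged workaround sketch: clear the internal per-sequence registry after each
# batch. seq_id_to_seq_group is a private LLMEngine attribute (the same one used
# above for monitoring), so this may break on other vLLM versions.
outputs = rollout.inference_engine.generate(data, rollout.sampling_params, use_tqdm=False)
rollout.inference_engine.llm_engine.seq_id_to_seq_group.clear()
```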


hiyouga commented Mar 6, 2025

Should be solved in #14326
