
[v0.7.1rc1] FAQ & Feedback #19

Open
Yikun opened this issue Feb 8, 2025 · 14 comments
Comments

@Yikun
Collaborator

Yikun commented Feb 8, 2025

Please leave comments here about your usage of the vLLM Ascend Plugin.

Does it work? Does it not work? Which models do you need? Which features do you need? Any bugs?

For in-depth discussion, please feel free to join #sig-ascend in the vLLM Slack workspace.

Next RC release: v0.7.3rc1 will be ready in early March (2025.03).


FAQ:

1. What devices are currently supported?

Currently, only the Atlas A2 series is supported.

  • Atlas A2 Training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)
  • Atlas 800I A2 Inference series (Atlas 800I A2)

2. How to set up the dev env, build, and test?

Here is a step-by-step guide for building and testing.

If you just want to install the stable vLLM release, please refer to: https://vllm-ascend.readthedocs.io/en/latest/installation.html

3. How to do multi-node deployment?

You can launch a multi-node service with Ray; find more details in our tutorial: Online Serving on Multi Machine.

  • ray: command not found: run pip install ray
  • fatal error: numa.h: No such file or directory: run yum install numactl-devel or apt install libnuma-dev
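The prerequisite checks above can be sketched as a small preflight script. This is only an illustration: the header search paths are assumptions and may differ on your distro.

```python
import shutil
from pathlib import Path


def multinode_preflight():
    """Check the Ray / libnuma prerequisites listed above (sketch)."""
    issues = []
    # `ray` must be on PATH before launching the cluster.
    if shutil.which("ray") is None:
        issues.append("ray CLI missing: pip install ray")
    # numa.h is needed to build NUMA-aware components; paths are assumed.
    numa_dirs = ("/usr/include", "/usr/local/include")
    if not any((Path(d) / "numa.h").exists() for d in numa_dirs):
        issues.append("numa.h missing: yum install numactl-devel"
                      " or apt install libnuma-dev")
    return issues


if __name__ == "__main__":
    for issue in multinode_preflight():
        print(issue)
```

An empty result means both prerequisites look satisfied on this node; run it on every machine in the cluster.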

4. RuntimeError: Failed to infer device type or ImportError: libatb.so: cannot open shared object file: No such file or directory.

This is usually caused by a wrong torch_npu version or a missing Ascend CANN NNAL package.

Make sure you install the correct version of torch_npu, together with the matching CANN and NNAL packages.

The required torch_npu and CANN NNAL versions can be found in our docs.
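When debugging this, a quick first step is to confirm which versions are actually installed. A minimal sketch using the standard library (the torch-npu distribution name is an assumption; check the vllm-ascend installation docs for the exact compatible pairs):

```python
from importlib import metadata


def pkg_version(name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None


# Compare these against the pairing listed in the vllm-ascend docs.
print("torch     :", pkg_version("torch"))
print("torch_npu :", pkg_version("torch-npu"))
```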

5. Is the Atlas 300 currently supported?

Not yet; currently only Atlas A2 series devices are supported, as shown here.

From a technical view, vllm-ascend support would be possible once torch-npu supports the device; otherwise we would have to implement it with custom ops. You are welcome to join us and improve it together.

6. Are quantization algorithms currently supported?

Not yet, but we plan to support the W8A8 and FA3 quantization algorithms in the future.

7. Inference speed is slow.

Currently, the performance of vLLM on Ascend still needs improvement. We are working together with the Ascend team on it; the first full release will be v0.7.3 in 2025 Q1. Everyone is welcome to join us and help improve it.

8. DeepSeek V3 / R1 related errors.

These known issues will be fixed in vllm-ascend v0.7.3rc1 (March 2025) together with CANN 8.1.RC1.alpha001 (Feb 2025):

  • AssertionError: Torch not compiled with CUDA enabled.
  • RuntimeError: GroupTopkOperation CreateOperation failed.
  • ValueError: Unknown quantization method: ascend.
  • ...

Find more details in #72, which tracks initial support for the DeepSeek V3 model with vllm-ascend.

9. Qwen2-VL / Qwen2.5-VL related errors.

Q1: Qwen2-VL-72B-Instruct inference failure: RuntimeError: call aclnnFlashAttentionScore failed. (#115)

This is caused by an internal error in CANN ops, which will be fixed in the next CANN version.

Note that Qwen2 in vLLM only works with the torch SDPA backend on non-GPU platforms. We'll improve vLLM to support more backends in the next release. Find more details here.

10. Error: TBE Subprocess Task Distribute Failure when TP > 1 (#198)

[ERROR] TBE Subprocess[task_distribute] raise error[], main process disappeared!

This does not mean the model failed to load; rather, the model process failed to exit cleanly. Adding code that manually cleans up the engine objects, as shown in the tutorials, resolves this error.
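The manual cleanup can be sketched as below. This mirrors the drop-reference / collect / release-cache pattern from the tutorials, but the exact helper is hypothetical, and torch.npu is provided by torch_npu, so it may be absent off-device:

```python
import contextlib
import gc


def shutdown_engine(llm):
    """Manually tear down the engine object before exit (sketch).

    Drops the last reference, forces a GC pass, then releases cached
    NPU memory if torch_npu is available.
    """
    del llm          # drop the caller's engine reference
    gc.collect()     # force destruction of engine-held objects
    with contextlib.suppress(ImportError, AttributeError):
        import torch
        torch.npu.empty_cache()  # no-op when torch_npu is absent
```

Call it on the LLM instance right before the process exits, instead of relying on interpreter teardown.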


(Updated on: 2025.03.06)

@Yikun Yikun pinned this issue Feb 8, 2025
@Yikun Yikun changed the title [Alhpa] FAQ & Feedback [v0.7.1rc1] FAQ & Feedback Feb 17, 2025
@shannanyinxiang

Any plans to support qwen2.5-vl?

@Yikun
Collaborator Author

Yikun commented Feb 17, 2025

Any plans to support qwen2.5-vl?

@shannanyinxiang According to our tests, qwen2.5-vl (and qwen2-vl) already work, so you can give it a try. If you encounter any problems, please feel free to raise an issue. Contributions to the docs (like #53) are also welcome.

@shannanyinxiang

Any plans to support qwen2.5-vl?

@shannanyinxiang According to our tests, qwen2.5-vl already works, so you can give it a try. If you encounter any problems, please feel free to raise an issue. Contributions to the docs (like #53) are also welcome.

Thank you for your prompt reply!

@invokerbyxv

Any plans to support qwen2.5-vl?

@shannanyinxiang According to our tests, qwen2.5-vl (and qwen2-vl) already work, so you can give it a try. If you encounter any problems, please feel free to raise an issue. Contributions to the docs (like #53) are also welcome.

Could you share the launch parameters you used for qwen2-vl?

@wangxiyuan
Collaborator

@whu-dft please follow the install guide: https://vllm-ascend.readthedocs.io/en/v0.7.1rc1/installation.html

pip install vllm vllm-ascend doesn't work currently. We'll make it available in the next release.

@whu-dft

whu-dft commented Feb 21, 2025

Thanks!

@sisrfeng

Is there any table comparing vllm-ascend vs. MindIE in terms of speed, model support, etc.?

@Infinite666

Infinite666 commented Feb 22, 2025

Same as above; we need performance numbers for vllm-ascend on different hardware. We tested both vllm-ascend and MindIE on 910B, and MindIE's performance seems better.

@Yikun
Collaborator Author

Yikun commented Feb 22, 2025

@Infinite666 @sisrfeng Thanks for your feedback.

Currently, the performance and accuracy of vLLM on Ascend still need improvement, and we are working together with the MindIE team on it. The first full release will be v0.7.3 in 2025 Q1. In the short term we will keep focusing on vLLM Ascend performance, and everyone is welcome to join us and help improve it.


@WWCTF

WWCTF commented Mar 5, 2025

When deploying the DeepSeek-R1-Distill-70B model with the launch command python3 -m vllm.entrypoints.openai.api_server --model /workspace/models/DeepSeek-R1-Distill-70B/ --tensor-parallel-size 8,

I get the following error:
(VllmWorkerProcess pid=20483) INFO 03-05 07:15:00 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=20485) INFO 03-05 07:15:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=20487) INFO 03-05 07:15:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=20489) INFO 03-05 07:15:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=20491) INFO 03-05 07:15:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=20493) INFO 03-05 07:15:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=20495) INFO 03-05 07:15:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
INFO 03-05 07:15:23 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-05 07:15:23 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-05 07:15:23 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-05 07:15:23 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-05 07:15:23 __init__.py:42] plugin ascend loaded.
INFO 03-05 07:15:24 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-05 07:15:24 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-05 07:15:24 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-05 07:15:24 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-05 07:15:24 __init__.py:42] plugin ascend loaded.
INFO 03-05 07:15:24 __init__.py:187] No platform detected, vLLM is running on UnspecifiedPlatform
WARNING 03-05 07:15:24 _custom_ops.py:19] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 03-05 07:15:24 __init__.py:174] Platform plugin ascend is activated
INFO 03-05 07:15:37 shm_broadcast.py:256] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1, 2, 3, 4, 5, 6, 7], buffer_handle=(7, 4194304, 6, 'psm_e6b6e72b'), local_subscribe_port=40457, remote_subscribe_port=None)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240] Exception in worker VllmWorkerProcess while processing method load_model.
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240] Traceback (most recent call last):
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 234, in _run_worker_process
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     return func(*args, **kwargs)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/worker.py", line 188, in load_model
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.model_runner.load_model()
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/model_runner.py", line 830, in load_model
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.model = get_model(vllm_config=self.vllm_config)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     return loader.load_model(vllm_config=vllm_config)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 377, in load_model
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     model = _initialize_model(vllm_config=vllm_config)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 119, in _initialize_model
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     return model_class(vllm_config=vllm_config, prefix=prefix)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 488, in __init__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.model = self._init_model(vllm_config=vllm_config,
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 525, in _init_model
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     return LlamaModel(vllm_config=vllm_config, prefix=prefix)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 149, in __init__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 321, in __init__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.start_layer, self.end_layer, self.layers = make_layers(
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 555, in make_layers
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     [PPMissingLayer() for _ in range(start_layer)] + [
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 556, in <listcomp>
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 323, in <lambda>
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     lambda prefix: layer_type(config=config,
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 234, in __init__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.self_attn = LlamaAttention(
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 135, in __init__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.qkv_proj = QKVParallelLinear(
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 728, in __init__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     super().__init__(input_size=input_size,
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 311, in __init__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.quant_method.create_weights(
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 127, in create_weights
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     weight = Parameter(torch.empty(sum(output_partition_sizes),
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/torch/utils/_device.py", line 106, in __torch_function__
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     return func(*args, **kwargs)
(VllmWorkerProcess pid=20495) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240] RuntimeError: NPU out of memory. Tried to allocate 22.00 MiB (NPU 7; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 21.36 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
[Identical tracebacks repeated for the other worker processes (pid=20491 on NPU 5, pid=20489 on NPU 4, pid=20487, ...), differing only in pid, NPU index, and free memory.]
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 149, in __init__
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 321, in __init__
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.start_layer, self.end_layer, self.layers = make_layers(
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 555, in make_layers
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     [PPMissingLayer() for _ in range(start_layer)] + [
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 556, in <listcomp>
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 323, in <lambda>
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     lambda prefix: layer_type(config=config,
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 234, in __init__
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.self_attn = LlamaAttention(
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 135, in __init__
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.qkv_proj = QKVParallelLinear(
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 728, in __init__
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     super().__init__(input_size=input_size,
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 311, in __init__
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.quant_method.create_weights(
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 127, in create_weights
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     weight = Parameter(torch.empty(sum(output_partition_sizes),
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/torch/utils/_device.py", line 106, in __torch_function__
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     return func(*args, **kwargs)
(VllmWorkerProcess pid=20487) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240] RuntimeError: NPU out of memory. Tried to allocate 22.00 MiB (NPU 3; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 21.24 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
ERROR 03-05 07:15:39 engine.py:387] NPU out of memory. Tried to allocate 22.00 MiB (NPU 0; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 19.52 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
ERROR 03-05 07:15:39 engine.py:387] Traceback (most recent call last):
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
ERROR 03-05 07:15:39 engine.py:387]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 121, in from_engine_args
ERROR 03-05 07:15:39 engine.py:387]     return cls(ipc_path=ipc_path,
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 73, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self.engine = LLMEngine(*args, **kwargs)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 271, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 260, in __init__
ERROR 03-05 07:15:39 engine.py:387]     super().__init__(*args, **kwargs)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 49, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self._init_executor()
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 123, in _init_executor
ERROR 03-05 07:15:39 engine.py:387]     self._run_workers("load_model",
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 183, in _run_workers
ERROR 03-05 07:15:39 engine.py:387]     driver_worker_output = run_method(self.driver_worker, sent_method,
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
ERROR 03-05 07:15:39 engine.py:387]     return func(*args, **kwargs)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/worker.py", line 188, in load_model
ERROR 03-05 07:15:39 engine.py:387]     self.model_runner.load_model()
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/model_runner.py", line 830, in load_model
ERROR 03-05 07:15:39 engine.py:387]     self.model = get_model(vllm_config=self.vllm_config)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
ERROR 03-05 07:15:39 engine.py:387]     return loader.load_model(vllm_config=vllm_config)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 377, in load_model
ERROR 03-05 07:15:39 engine.py:387]     model = _initialize_model(vllm_config=vllm_config)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 119, in _initialize_model
ERROR 03-05 07:15:39 engine.py:387]     return model_class(vllm_config=vllm_config, prefix=prefix)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 488, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self.model = self._init_model(vllm_config=vllm_config,
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 525, in _init_model
ERROR 03-05 07:15:39 engine.py:387]     return LlamaModel(vllm_config=vllm_config, prefix=prefix)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 149, in __init__
ERROR 03-05 07:15:39 engine.py:387]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 321, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self.start_layer, self.end_layer, self.layers = make_layers(
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 555, in make_layers
ERROR 03-05 07:15:39 engine.py:387]     [PPMissingLayer() for _ in range(start_layer)] + [
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 556, in <listcomp>
ERROR 03-05 07:15:39 engine.py:387]     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 323, in <lambda>
ERROR 03-05 07:15:39 engine.py:387]     lambda prefix: layer_type(config=config,
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 234, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self.self_attn = LlamaAttention(
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 135, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self.qkv_proj = QKVParallelLinear(
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 728, in __init__
ERROR 03-05 07:15:39 engine.py:387]     super().__init__(input_size=input_size,
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 311, in __init__
ERROR 03-05 07:15:39 engine.py:387]     self.quant_method.create_weights(
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 127, in create_weights
ERROR 03-05 07:15:39 engine.py:387]     weight = Parameter(torch.empty(sum(output_partition_sizes),
ERROR 03-05 07:15:39 engine.py:387]   File "/usr/local/python3.10/lib/python3.10/site-packages/torch/utils/_device.py", line 106, in __torch_function__
ERROR 03-05 07:15:39 engine.py:387]     return func(*args, **kwargs)
ERROR 03-05 07:15:39 engine.py:387] RuntimeError: NPU out of memory. Tried to allocate 22.00 MiB (NPU 0; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 19.52 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
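The allocator hint at the end of the traceback can be acted on from the launch environment. A minimal sketch of two mitigations, with assumptions: it presumes torch_npu honors a CUDA-style allocator knob under `PYTORCH_NPU_ALLOC_CONF` (verify against your installed torch_npu docs), and the model path is a placeholder:

```shell
# Assumption: torch_npu mirrors PyTorch's CUDA allocator config under
# PYTORCH_NPU_ALLOC_CONF; check your torch_npu version before relying on it.
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256

# Alternatively, leave more headroom on each NPU by lowering vLLM's
# memory fraction from its 0.9 default (model path is a placeholder).
vllm serve /path/to/model --gpu-memory-utilization 0.85
```

Lowering `--gpu-memory-utilization` trades KV-cache capacity for allocation headroom, so it mainly helps when the failure happens during model load rather than during decoding.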
(VllmWorkerProcess pid=20485) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     self.quant_method.create_weights(
(VllmWorkerProcess pid=20485) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 127, in create_weights
(VllmWorkerProcess pid=20485) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     weight = Parameter(torch.empty(sum(output_partition_sizes),
(VllmWorkerProcess pid=20485) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]   File "/usr/local/python3.10/lib/python3.10/site-packages/torch/utils/_device.py", line 106, in __torch_function__
(VllmWorkerProcess pid=20485) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240]     return func(*args, **kwargs)
(VllmWorkerProcess pid=20485) ERROR 03-05 07:15:39 multiproc_worker_utils.py:240] RuntimeError: NPU out of memory. Tried to allocate 22.00 MiB (NPU 2; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 23.06 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/local/python3.10/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/python3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 389, in run_mp_engine
    raise e
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 121, in from_engine_args
    return cls(ipc_path=ipc_path,
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 73, in __init__
    self.engine = LLMEngine(*args, **kwargs)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 271, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 260, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 49, in __init__
    self._init_executor()
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 123, in _init_executor
    self._run_workers("load_model",
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 183, in _run_workers
    driver_worker_output = run_method(self.driver_worker, sent_method,
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
    return func(*args, **kwargs)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/worker.py", line 188, in load_model
    self.model_runner.load_model()
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/model_runner.py", line 830, in load_model
    self.model = get_model(vllm_config=self.vllm_config)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
    return loader.load_model(vllm_config=vllm_config)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 377, in load_model
    model = _initialize_model(vllm_config=vllm_config)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 119, in _initialize_model
    return model_class(vllm_config=vllm_config, prefix=prefix)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 488, in __init__
    self.model = self._init_model(vllm_config=vllm_config,
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 525, in _init_model
    return LlamaModel(vllm_config=vllm_config, prefix=prefix)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 149, in __init__
    old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 321, in __init__
    self.start_layer, self.end_layer, self.layers = make_layers(
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 555, in make_layers
    [PPMissingLayer() for _ in range(start_layer)] + [
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 556, in <listcomp>
    maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 323, in <lambda>
    lambda prefix: layer_type(config=config,
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 234, in __init__
    self.self_attn = LlamaAttention(
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 135, in __init__
    self.qkv_proj = QKVParallelLinear(
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 728, in __init__
    super().__init__(input_size=input_size,
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 311, in __init__
    self.quant_method.create_weights(
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 127, in create_weights
    weight = Parameter(torch.empty(sum(output_partition_sizes),
  File "/usr/local/python3.10/lib/python3.10/site-packages/torch/utils/_device.py", line 106, in __torch_function__
    return func(*args, **kwargs)
RuntimeError: NPU out of memory. Tried to allocate 22.00 MiB (NPU 0; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 19.52 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
INFO 03-05 07:15:41 multiproc_worker_utils.py:139] Terminating local vLLM worker processes
[ERROR] TBE Subprocess[task_distribute] raise error[], main process disappeared!
Traceback (most recent call last):
  File "/usr/local/python3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/python3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 909, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/python3.10/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
    return loop.run_until_complete(wrapper())
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/python3.10/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 873, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/local/python3.10/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 134, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/local/python3.10/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 228, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
[ERROR] 2025-03-05-07:15:42 (PID:20129, Device:-1, RankID:-1) ERR99999 UNKNOWN applicaiton exception
root@4ffad0458746:/# /usr/local/python3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 210 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
/usr/local/python3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

@Yikun
Collaborator Author

Yikun commented Mar 5, 2025

RuntimeError: NPU out of memory. Tried to allocate 22.00 MiB (NPU 0; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 19.52 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.

@WWCTF According to the log, this is an OOM. IIUC, please try a smaller model, or use multi-node serving: https://vllm-ascend.readthedocs.io/en/latest/tutorials.html#online-serving-on-multi-machine
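For context, a rough back-of-envelope check (illustrative only, assuming a Llama-70B-style config with 80 layers, 8 KV heads, and head dim 128; activations and allocator overhead are ignored): the bf16 weights of a 70B model need about 16 GiB per NPU at TP=8, so failing at only ~10 GiB allocated on a 60.97 GiB device suggests the memory was already largely occupied (e.g. by another process) or fragmented:

```python
# Rough memory estimate for serving a 70B bf16 model with tensor parallelism.
# All architecture numbers are assumptions (Llama-70B-style config), not
# values read from the actual model; this is a sketch, not a measurement.

GiB = 1024 ** 3

def weight_mem_per_device(n_params, bytes_per_param=2, tp=8):
    """bf16 weights sharded evenly across tp devices."""
    return n_params * bytes_per_param / tp

def kv_cache_per_token_per_device(layers=80, kv_heads=8, head_dim=128,
                                  bytes_per_elem=2, tp=8):
    """K and V cache entries for one token, sharded across tp devices."""
    return layers * kv_heads * head_dim * 2 * bytes_per_elem / tp

weights = weight_mem_per_device(70e9)          # weights per NPU
kv_per_tok = kv_cache_per_token_per_device()   # KV cache per token per NPU
kv_full_ctx = kv_per_tok * 131072              # one max-length sequence

print(f"weights/NPU: {weights / GiB:.1f} GiB")
print(f"KV cache/token/NPU: {kv_per_tok / 1024:.0f} KiB")
print(f"KV cache for one 131072-token seq/NPU: {kv_full_ctx / GiB:.1f} GiB")
```

Under these assumptions the weights alone (~16.3 GiB/NPU) plus the KV cache for long contexts should still fit on a 60+ GiB device, which is why checking for other processes on the NPU (e.g. with `npu-smi info`) and lowering `--max-model-len` are the first things to try.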

@WWCTF

WWCTF commented Mar 6, 2025

RuntimeError: NPU out of memory. Tried to allocate 22.00 MiB (NPU 0; 60.97 GiB total capacity; 10.24 GiB already allocated; 10.24 GiB current active; 19.52 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.

@WWCTF According to the log, this is an OOM. IIUC, please try a smaller model, or use multi-node serving: https://vllm-ascend.readthedocs.io/en/latest/tutorials.html#online-serving-on-multi-machine

The actual error output is as follows:

root@4ffad0458746:/# python3 -m vllm.entrypoints.openai.api_server --model /workspace/models/DeepSeek-R1-Distill-70B/ --tensor-parallel-size 8 --gpu-memory-utilization 0.95
INFO 03-06 01:16:36 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-06 01:16:36 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-06 01:16:36 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-06 01:16:36 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-06 01:16:36 __init__.py:42] plugin ascend loaded.
INFO 03-06 01:16:37 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-06 01:16:37 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-06 01:16:37 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-06 01:16:37 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-06 01:16:37 __init__.py:42] plugin ascend loaded.
INFO 03-06 01:16:37 __init__.py:187] No platform detected, vLLM is running on UnspecifiedPlatform
WARNING 03-06 01:16:37 _custom_ops.py:19] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 03-06 01:16:37 __init__.py:174] Platform plugin ascend is activated
INFO 03-06 01:16:39 api_server.py:838] vLLM API server version 0.7.1
INFO 03-06 01:16:39 api_server.py:839] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='/workspace/models/DeepSeek-R1-Distill-70B/', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=8, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.95, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, 
lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
INFO 03-06 01:16:39 api_server.py:204] Started engine process with PID 25556
INFO 03-06 01:16:46 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-06 01:16:46 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-06 01:16:46 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-06 01:16:46 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-06 01:16:46 __init__.py:42] plugin ascend loaded.
INFO 03-06 01:16:47 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-06 01:16:47 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-06 01:16:47 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-06 01:16:47 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-06 01:16:47 __init__.py:42] plugin ascend loaded.
INFO 03-06 01:16:47 __init__.py:187] No platform detected, vLLM is running on UnspecifiedPlatform
WARNING 03-06 01:16:47 _custom_ops.py:19] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 03-06 01:16:47 __init__.py:174] Platform plugin ascend is activated
INFO 03-06 01:16:50 config.py:526] This model supports multiple tasks: {'classify', 'generate', 'score', 'embed', 'reward'}. Defaulting to 'generate'.
INFO 03-06 01:16:50 config.py:1383] Defaulting to use mp for distributed inference
WARNING 03-06 01:16:50 arg_utils.py:1129] The model has a long context length (131072). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
INFO 03-06 01:16:50 importing.py:14] Triton not installed or not compatible; certain GPU-related functions will not be available.
INFO 03-06 01:17:00 config.py:526] This model supports multiple tasks: {'embed', 'score', 'reward', 'generate', 'classify'}. Defaulting to 'generate'.
INFO 03-06 01:17:00 config.py:1383] Defaulting to use mp for distributed inference
WARNING 03-06 01:17:00 arg_utils.py:1129] The model has a long context length (131072). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
INFO 03-06 01:17:00 importing.py:14] Triton not installed or not compatible; certain GPU-related functions will not be available.
INFO 03-06 01:17:00 llm_engine.py:232] Initializing a V0 LLM engine (v0.7.1) with config: model='/workspace/models/DeepSeek-R1-Distill-70B/', speculative_config=None, tokenizer='/workspace/models/DeepSeek-R1-Distill-70B/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=8, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=npu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=/workspace/models/DeepSeek-R1-Distill-70B/, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
WARNING 03-06 01:17:01 multiproc_worker_utils.py:298] Reducing Torch parallelism from 192 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
(VllmWorkerProcess pid=25837) INFO 03-06 01:17:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=25839) INFO 03-06 01:17:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=25841) INFO 03-06 01:17:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=25843) INFO 03-06 01:17:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=25845) INFO 03-06 01:17:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=25847) INFO 03-06 01:17:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
(VllmWorkerProcess pid=25849) INFO 03-06 01:17:01 multiproc_worker_utils.py:227] Worker ready; awaiting tasks
INFO 03-06 01:17:20 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-06 01:17:20 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-06 01:17:20 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-06 01:17:20 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-06 01:17:20 __init__.py:42] plugin ascend loaded.
INFO 03-06 01:17:20 __init__.py:28] Available plugins for group vllm.platform_plugins:
INFO 03-06 01:17:20 __init__.py:30] name=ascend, value=vllm_ascend:register
INFO 03-06 01:17:20 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
INFO 03-06 01:17:20 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
INFO 03-06 01:17:20 __init__.py:42] plugin ascend loaded.
INFO 03-06 01:17:20 __init__.py:187] No platform detected, vLLM is running on UnspecifiedPlatform
WARNING 03-06 01:17:20 _custom_ops.py:19] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 03-06 01:17:20 __init__.py:174] Platform plugin ascend is activated
INFO 03-06 01:17:34 shm_broadcast.py:256] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1, 2, 3, 4, 5, 6, 7], buffer_handle=(7, 4194304, 6, 'psm_95969e17'), local_subscribe_port=56571, remote_subscribe_port=None)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] Exception in worker VllmWorkerProcess while processing method load_model.
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] Traceback (most recent call last):
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 234, in _run_worker_process
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] return func(*args, **kwargs)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/worker.py", line 188, in load_model
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] self.model_runner.load_model()
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/model_runner.py", line 830, in load_model
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] self.model = get_model(vllm_config=self.vllm_config)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] return loader.load_model(vllm_config=vllm_config)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 380, in load_model
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] loaded_weights = model.load_weights(
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in load_weights
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] return loader.load_weights(
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 233, in load_weights
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] autoloaded_weights = set(self._load_module("", self.module, weights))
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 185, in _load_module
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] for child_prefix, child_weights in self._groupby_prefix(weights):
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 99, in _groupby_prefix
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] for prefix, group in itertools.groupby(weights_by_parts,
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 96, in <genexpr>
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] weights_by_parts = ((weight_name.split(".", 1), weight_data)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in <genexpr>
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] return loader.load_weights(
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 355, in _get_all_weights
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] yield from self._get_weights_iterator(primary_weights)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 308, in _get_weights_iterator
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] hf_folder, hf_weights_files, use_safetensors = self._prepare_weights(
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 292, in _prepare_weights
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] hf_weights_files = filter_duplicate_safetensors_files(
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/weight_utils.py", line 318, in filter_duplicate_safetensors_files
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] weight_map = json.load(f)["weight_map"]
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 293, in load
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] return loads(fp.read(),
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 346, in loads
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] return _default_decoder.decode(s)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/json/decoder.py", line 340, in decode
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] raise JSONDecodeError("Extra data", s, end)
(VllmWorkerProcess pid=25843) ERROR 03-06 01:17:35 multiproc_worker_utils.py:240] json.decoder.JSONDecodeError: Extra data: line 731 column 1 (char 60338)
[... identical traceback and json.decoder.JSONDecodeError repeated by VllmWorkerProcess pid=25841 and pid=25845 ...]
ERROR 03-06 01:17:36 engine.py:387] Extra data: line 731 column 1 (char 60338)
ERROR 03-06 01:17:36 engine.py:387] Traceback (most recent call last):
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
ERROR 03-06 01:17:36 engine.py:387] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 121, in from_engine_args
ERROR 03-06 01:17:36 engine.py:387] return cls(ipc_path=ipc_path,
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 73, in __init__
ERROR 03-06 01:17:36 engine.py:387] self.engine = LLMEngine(*args, **kwargs)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 271, in __init__
ERROR 03-06 01:17:36 engine.py:387] self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 260, in __init__
ERROR 03-06 01:17:36 engine.py:387] super().__init__(*args, **kwargs)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 49, in __init__
ERROR 03-06 01:17:36 engine.py:387] self._init_executor()
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 123, in _init_executor
ERROR 03-06 01:17:36 engine.py:387] self._run_workers("load_model",
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 183, in _run_workers
ERROR 03-06 01:17:36 engine.py:387] driver_worker_output = run_method(self.driver_worker, sent_method,
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
ERROR 03-06 01:17:36 engine.py:387] return func(*args, **kwargs)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/worker.py", line 188, in load_model
ERROR 03-06 01:17:36 engine.py:387] self.model_runner.load_model()
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/model_runner.py", line 830, in load_model
ERROR 03-06 01:17:36 engine.py:387] self.model = get_model(vllm_config=self.vllm_config)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
ERROR 03-06 01:17:36 engine.py:387] return loader.load_model(vllm_config=vllm_config)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 380, in load_model
ERROR 03-06 01:17:36 engine.py:387] loaded_weights = model.load_weights(
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in load_weights
ERROR 03-06 01:17:36 engine.py:387] return loader.load_weights(
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 233, in load_weights
ERROR 03-06 01:17:36 engine.py:387] autoloaded_weights = set(self._load_module("", self.module, weights))
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 185, in _load_module
ERROR 03-06 01:17:36 engine.py:387] for child_prefix, child_weights in self._groupby_prefix(weights):
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 99, in _groupby_prefix
ERROR 03-06 01:17:36 engine.py:387] for prefix, group in itertools.groupby(weights_by_parts,
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 96, in <genexpr>
ERROR 03-06 01:17:36 engine.py:387] weights_by_parts = ((weight_name.split(".", 1), weight_data)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in <genexpr>
ERROR 03-06 01:17:36 engine.py:387] return loader.load_weights(
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 355, in _get_all_weights
ERROR 03-06 01:17:36 engine.py:387] yield from self._get_weights_iterator(primary_weights)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 308, in _get_weights_iterator
ERROR 03-06 01:17:36 engine.py:387] hf_folder, hf_weights_files, use_safetensors = self._prepare_weights(
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 292, in _prepare_weights
ERROR 03-06 01:17:36 engine.py:387] hf_weights_files = filter_duplicate_safetensors_files(
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/weight_utils.py", line 318, in filter_duplicate_safetensors_files
ERROR 03-06 01:17:36 engine.py:387] weight_map = json.load(f)["weight_map"]
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 293, in load
ERROR 03-06 01:17:36 engine.py:387] return loads(fp.read(),
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 346, in loads
ERROR 03-06 01:17:36 engine.py:387] return _default_decoder.decode(s)
ERROR 03-06 01:17:36 engine.py:387] File "/usr/local/python3.10/lib/python3.10/json/decoder.py", line 340, in decode
ERROR 03-06 01:17:36 engine.py:387] raise JSONDecodeError("Extra data", s, end)
ERROR 03-06 01:17:36 engine.py:387] json.decoder.JSONDecodeError: Extra data: line 731 column 1 (char 60338)
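The `Extra data` JSONDecodeError above is raised while parsing `model.safetensors.index.json`, which usually means the index file in the local model directory is truncated or corrupted (for example by an interrupted download). A minimal sketch to verify the file before re-downloading the weights; `check_index` is a hypothetical helper and the path in the usage comment is an example:

```python
import json

def check_index(path):
    """Report whether a safetensors index file parses as valid JSON."""
    try:
        with open(path) as f:
            index = json.load(f)
    except json.JSONDecodeError as e:
        # "Extra data" / "Expecting value" here means the file is
        # corrupted or truncated -- re-download the model weights.
        return f"corrupted: {e.msg} at line {e.lineno}"
    if "weight_map" not in index:
        return "missing weight_map key"
    return "ok"

# Example (hypothetical model path):
# print(check_index("/path/to/model/model.safetensors.index.json"))
```

If this reports `corrupted`, deleting the local copy and re-downloading the model should resolve the error.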
[... identical traceback and json.decoder.JSONDecodeError repeated by VllmWorkerProcess pid=25837 ...]
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] Exception in worker VllmWorkerProcess while processing method load_model.
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] Traceback (most recent call last):
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 234, in _run_worker_process
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] return func(*args, **kwargs)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/worker.py", line 188, in load_model
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] self.model_runner.load_model()
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/model_runner.py", line 830, in load_model
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] self.model = get_model(vllm_config=self.vllm_config)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] return loader.load_model(vllm_config=vllm_config)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 380, in load_model
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] loaded_weights = model.load_weights(
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in load_weights
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] return loader.load_weights(
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 233, in load_weights
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] autoloaded_weights = set(self._load_module("", self.module, weights))
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 185, in _load_module
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] for child_prefix, child_weights in self._groupby_prefix(weights):
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 99, in _groupby_prefix
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] for prefix, group in itertools.groupby(weights_by_parts,
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 96, in
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] weights_by_parts = ((weight_name.split(".", 1), weight_data)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in <genexpr>
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] return loader.load_weights(
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 355, in _get_all_weights
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] yield from self._get_weights_iterator(primary_weights)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 308, in _get_weights_iterator
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] hf_folder, hf_weights_files, use_safetensors = self._prepare_weights(
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 292, in _prepare_weights
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] hf_weights_files = filter_duplicate_safetensors_files(
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/weight_utils.py", line 318, in filter_duplicate_safetensors_files
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] weight_map = json.load(f)["weight_map"]
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 293, in load
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] return loads(fp.read(),
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 346, in loads
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] return _default_decoder.decode(s)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] File "/usr/local/python3.10/lib/python3.10/json/decoder.py", line 340, in decode
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] raise JSONDecodeError("Extra data", s, end)
(VllmWorkerProcess pid=25839) ERROR 03-06 01:17:36 multiproc_worker_utils.py:240] json.decoder.JSONDecodeError: Extra data: line 731 column 1 (char 60338)
[... identical tracebacks from VllmWorkerProcess pid=25849 and pid=25847 omitted ...]
Process SpawnProcess-1:
Traceback (most recent call last):
File "/usr/local/python3.10/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/python3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 389, in run_mp_engine
raise e
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 121, in from_engine_args
return cls(ipc_path=ipc_path,
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 73, in __init__
self.engine = LLMEngine(*args, **kwargs)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 271, in __init__
self.model_executor = executor_class(vllm_config=vllm_config, )
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 260, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 49, in __init__
self._init_executor()
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 123, in _init_executor
self._run_workers("load_model",
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 183, in _run_workers
driver_worker_output = run_method(self.driver_worker, sent_method,
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
return func(*args, **kwargs)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/worker.py", line 188, in load_model
self.model_runner.load_model()
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm_ascend/model_runner.py", line 830, in load_model
self.model = get_model(vllm_config=self.vllm_config)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
return loader.load_model(vllm_config=vllm_config)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 380, in load_model
loaded_weights = model.load_weights(
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in load_weights
return loader.load_weights(
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 233, in load_weights
autoloaded_weights = set(self._load_module("", self.module, weights))
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 185, in _load_module
for child_prefix, child_weights in self._groupby_prefix(weights):
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 99, in _groupby_prefix
for prefix, group in itertools.groupby(weights_by_parts,
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 96, in <genexpr>
weights_by_parts = ((weight_name.split(".", 1), weight_data)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 565, in <genexpr>
return loader.load_weights(
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 355, in _get_all_weights
yield from self._get_weights_iterator(primary_weights)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 308, in _get_weights_iterator
hf_folder, hf_weights_files, use_safetensors = self._prepare_weights(
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 292, in _prepare_weights
hf_weights_files = filter_duplicate_safetensors_files(
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/model_executor/model_loader/weight_utils.py", line 318, in filter_duplicate_safetensors_files
weight_map = json.load(f)["weight_map"]
File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/local/python3.10/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/python3.10/lib/python3.10/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 731 column 1 (char 60338)
INFO 03-06 01:17:37 multiproc_worker_utils.py:139] Terminating local vLLM worker processes
[ERROR] TBE Subprocess[task_distribute] raise error[], main process disappeared!
[... message repeated 55 more times ...]
Traceback (most recent call last):
File "/usr/local/python3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/python3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 909, in <module>
uvloop.run(run_server(args))
File "/usr/local/python3.10/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
return loop.run_until_complete(wrapper())
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/python3.10/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
return await main
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 873, in run_server
async with build_async_engine_client(args) as engine_client:
File "/usr/local/python3.10/lib/python3.10/contextlib.py", line 199, in __aenter__
return await anext(self.gen)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 134, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
File "/usr/local/python3.10/lib/python3.10/contextlib.py", line 199, in __aenter__
return await anext(self.gen)
File "/usr/local/python3.10/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 228, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
[ERROR] 2025-03-06-01:17:42 (PID:25483, Device:-1, RankID:-1) ERR99999 UNKNOWN applicaiton exception
root@4ffad0458746:/# /usr/local/python3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 210 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
/usr/local/python3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
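
For anyone hitting this: the failing frame is `filter_duplicate_safetensors_files`, which parses the checkpoint's `model.safetensors.index.json`. A `json.decoder.JSONDecodeError: Extra data` means that file contains trailing bytes after the first JSON object, which usually indicates a corrupted or partially re-downloaded index file rather than a bug in the Ascend plugin; re-downloading the model (or just the index file) typically resolves it. A minimal sketch of the failure mode and a quick validity check (the helper name and file path here are made up for illustration):

```python
import json
import tempfile

def check_index(path):
    """Return None if the file is valid JSON, else the decoder's error message."""
    try:
        with open(path) as f:
            json.load(f)
        return None
    except json.JSONDecodeError as e:
        return str(e)

# Simulate a corrupted index file: a valid JSON object followed by extra bytes,
# which is what a partial or doubled download can leave on disk.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{"weight_map": {}}\n{"garbage": true}')
    corrupted = f.name

print(check_index(corrupted))  # reports "Extra data: ...", like the traceback above
```

Running `check_index` against the real `model.safetensors.index.json` in your model directory tells you immediately whether the file on disk is the problem before re-launching the server.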
