
[Doc]Add benchmark scripts #74

Open · wants to merge 6 commits into base: main
Conversation

@Potabk (Contributor) commented Feb 17, 2025

What this PR does / why we need it?

This PR adds benchmark scripts for NPU so that developers can easily run performance tests on their own machines with a single command.
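
A minimal invocation sketch, assuming the added scripts follow the interface of vLLM's benchmarks/benchmark_latency.py; the path, model name, and flag values below are illustrative placeholders, not taken from this PR:

```bash
# Hypothetical one-line latency benchmark on an NPU machine; flags follow
# vLLM's benchmark_latency.py conventions and may differ in this PR's scripts.
python benchmarks/benchmark_latency.py --model Qwen/Qwen2.5-7B-Instruct --input-len 128 --output-len 128 --num-iters 10
```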

Does this PR introduce any user-facing change?

How was this patch tested?

@Potabk Potabk changed the title [Misc]dd benchmark scripts [Misc]Add benchmark scripts Feb 17, 2025
@Potabk Potabk changed the title [Misc]Add benchmark scripts [Misc][WIP]Add benchmark scripts Feb 17, 2025
@Potabk Potabk changed the title [Misc][WIP]Add benchmark scripts [Doc]Add benchmark scripts Feb 26, 2025
Signed-off-by: wangli <[email protected]>
@Yikun (Collaborator) left a comment

emm, I only reviewed tutorials.md and benchmark_latency.py.

The question is: should we copy the vLLM benchmarks here, or just use them directly?

@@ -308,4 +308,54 @@ Logs of the vllm server:
```
INFO: 127.0.0.1:59384 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 02-19 17:37:35 metrics.py:453] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 1.9 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
```

## Performance Benchmark
Collaborator: This should be a developer guide.

@@ -0,0 +1,193 @@
# SPDX-License-Identifier: Apache-2.0

#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# This file is a part of the vllm-ascend project.
# Adapted from vllm-project/vllm/benchmarks/backend_request_func.py
# Copyright 2023 The vLLM team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

**kwargs,
)

ASYNC_REQUEST_FUNCS = {
Collaborator: Please add a note about the differences from vLLM.
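
For context on the snippet above: in vLLM's benchmarks/backend_request_func.py, ASYNC_REQUEST_FUNCS is a registry mapping backend names to async request coroutines, which the benchmark driver looks up by backend name. A minimal self-contained sketch of that pattern, with simplified types and an illustrative function body (not this PR's actual code):

```python
# Sketch of the backend-registry pattern from vLLM's benchmarks;
# the dataclass fields and function body are simplified stand-ins.
import asyncio
from dataclasses import dataclass


@dataclass
class RequestFuncInput:
    prompt: str
    api_url: str


@dataclass
class RequestFuncOutput:
    generated_text: str = ""
    latency: float = 0.0


async def async_request_openai_completions(
        request_func_input: RequestFuncInput) -> RequestFuncOutput:
    # A real implementation POSTs to the server's completions endpoint
    # and records per-token timings; both are elided in this sketch.
    return RequestFuncOutput(generated_text="...", latency=0.0)


# Benchmark drivers pick the request coroutine by backend name.
ASYNC_REQUEST_FUNCS = {
    "vllm": async_request_openai_completions,
    "openai": async_request_openai_completions,
}

if __name__ == "__main__":
    func = ASYNC_REQUEST_FUNCS["vllm"]
    result = asyncio.run(
        func(RequestFuncInput(prompt="hi",
                              api_url="http://localhost:8000/v1/completions")))
    print(result)
```

A note in the adapted file could simply list which registry entries were added or changed for NPU backends relative to upstream vLLM.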

@@ -0,0 +1,152 @@
# SPDX-License-Identifier: Apache-2.0
Collaborator: same as above.
