
RTX 4090 error when using demo1 #6

Open
xixingya opened this issue Jan 13, 2025 · 1 comment

@xixingya

Traceback (most recent call last):
  File "/usr/local/bin/lmcache_vllm", line 5, in <module>
    from lmcache_vllm.script import main
  File "/usr/local/lib/python3.12/dist-packages/lmcache_vllm/__init__.py", line 4, in <module>
    from lmcache_vllm.vllm_injection import InitLMCacheEnvironment
  File "/usr/local/lib/python3.12/dist-packages/lmcache_vllm/vllm_injection.py", line 15, in <module>
    from lmcache_vllm.vllm_adapter import (lmcache_get_config,
  File "/usr/local/lib/python3.12/dist-packages/lmcache_vllm/vllm_adapter.py", line 32, in <module>
    LMCACHE_CUDA_STREAM = torch.cuda.Stream()
                          ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/cuda/streams.py", line 35, in __new__
    return super().__new__(cls, priority=priority, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: forward compatibility was attempted on non supported HW
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Device-side assertions were explicitly omitted for this error check; the error probably arose while initializing the DSA handlers.

@xixingya (Author)

This happened because my host machine is running CUDA 12.2 while the container uses CUDA 12.4.
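
For anyone hitting the same error, a minimal diagnostic sketch (not part of lmcache_vllm; it assumes torch and nvidia-smi are available inside the container) that surfaces this kind of driver/runtime mismatch:

# Compare the CUDA runtime PyTorch was built against with the CUDA
# version the host driver supports. Run inside the container.
import subprocess

import torch

# CUDA runtime version bundled with the PyTorch build (12.4 in this container).
print("torch.version.cuda:", torch.version.cuda)

# The nvidia-smi banner (third output line) reports the driver's supported
# CUDA version (12.2 on this host). A driver older than the runtime can
# trigger "forward compatibility was attempted on non supported HW".
print(subprocess.check_output(["nvidia-smi"], text=True).splitlines()[2])

# Under such a mismatch this returns False (or CUDA calls raise) before
# torch.cuda.Stream() is ever created.
print("torch.cuda.is_available():", torch.cuda.is_available())

Upgrading the host driver to one that supports CUDA 12.4, or using a container image matching the host's CUDA version, should resolve it.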
