
fix(test): update client-sdk tests to handle tool format parametrization better #1287

Merged 3 commits into main on Feb 27, 2025

Conversation

ashwinb (Contributor) commented on Feb 27, 2025:

What does this PR do?

Tool format depends on the model. @ehhuang introduced a `get_default_tool_prompt_format` function for this purpose. We should use that instead of the hacky model ID matching we had before.

Secondly, non-Llama models don't have this concept, so testing with those models should work as-is.
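To illustrate the idea, a minimal sketch of such a helper is shown below. The function name mirrors the one mentioned above, but the model-family mappings here are assumptions for the sketch, not the actual llama-stack table:

```python
# Illustrative stand-in for a get_default_tool_prompt_format-style helper:
# resolve the tool prompt format from the model family instead of ad-hoc
# model-ID matching. The mappings below are assumed, not the real ones.
def get_default_tool_prompt_format(model_id: str):
    if "Llama-3.1" in model_id:
        return "json"
    if "Llama-3.2" in model_id:
        return "python_list"
    return None  # non-Llama models have no tool prompt format concept
```

Tests can then pass the returned value through, or skip setting `tool_prompt_format` entirely when it is `None`.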

Test Plan

```bash
for distro in fireworks ollama; do
  LLAMA_STACK_CONFIG=$distro \
    pytest -s -v tests/client-sdk/inference/test_text_inference.py \
       --inference-model=meta-llama/Llama-3.2-3B-Instruct \
       --vision-inference-model=""
done

LLAMA_STACK_CONFIG=dev \
   pytest -s -v tests/client-sdk/inference/test_text_inference.py \
       --inference-model=openai/gpt-4o \
       --vision-inference-model=""
```

@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Feb 27, 2025.
The review comment below is anchored on this snippet of the diff:

```python
    return content
elif isinstance(content, TextContentItem):
    return OpenAIChatCompletionContentPartTextParam(
        type="text",
        text=content,
    )
```
ashwinb (Contributor Author) commented:

Actually, there is a tricky bug here as well. Our `InterleavedContent` type is `Union[str, InterleavedContentItem, List[InterleavedContentItem]]`, but OpenAI's type does not have the second piece. This method needs (at the end) to wrap the value into an array if it is not an instance of `str` -- will fix.
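The fix described above can be sketched as follows. The type names are simplified stand-ins for this sketch, not the real llama-stack definitions:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class TextContentItem:
    text: str

# Stand-in for the union described in the comment above.
InterleavedContent = Union[str, TextContentItem, List[TextContentItem]]

def to_openai_content(content: InterleavedContent):
    # OpenAI accepts either a plain string or a *list* of content parts,
    # so a bare single item must be wrapped into a list before conversion.
    if isinstance(content, str):
        return content
    items = content if isinstance(content, list) else [content]
    return [{"type": "text", "text": item.text} for item in items]
```

The key line is the wrapping step: without it, a bare `TextContentItem` would be sent to OpenAI as a single object where a list is expected.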

```diff
 tc = TestCase(test_case)

 response = client_with_models.inference.chat_completion(
     model_id=text_model_id,
     messages=tc["messages"],
     tools=tc["tools"],
-    tool_config={"tool_choice": "none", "tool_prompt_format": provider_tool_format},
+    tool_config={"tool_choice": "none", "tool_prompt_format": tool_prompt_format},
 )
```
A reviewer (Contributor) commented:

Doesn't just leaving this unspecified work?

ashwinb (Contributor Author) replied:

You mean for all these tests?

ashwinb (Contributor Author) replied:

yeah, jeebuz I could have avoided all this work? I feel like a doofus.

@ashwinb merged commit 23b65b6 into main on Feb 27, 2025 -- 3 checks passed.
@ashwinb deleted the models_in_test branch on February 27, 2025 at 05:16.
srikanthbachala20 pushed a commit to srikanthbachala20/llama-stack referencing this pull request on Feb 27, 2025: "fix(test): update client-sdk tests to handle tool format parametrization better (meta-llama#1287)"
