updated Embeddings.ipynb to use latest python sdk #521
base: main
Conversation
Check out this pull request on ReviewNB. See visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB.
@@ -72,7 +72,7 @@
},
Line #2. result = client.models.embed_content(model="text-embedding-004", content=text)
Please run the Colab and make sure everything is correct.
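For reference, a minimal sketch of what this call might look like against the new google-genai SDK (the client setup below is hypothetical; note that the new SDK's argument is contents, as used later in this thread, whereas content was the old SDK's parameter name):

from google import genai

# Hypothetical setup for illustration only; the notebook configures its own client.
client = genai.Client(api_key="YOUR_API_KEY")

text = "Hello world"
# New SDK: the input argument is `contents` (a string or a list of strings).
result = client.models.embed_content(model="text-embedding-004", contents=text)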
@@ -72,7 +72,7 @@
},
Line #5. print(str(result.embeddings)[:50], '... TRIMMED]')
This isn't quite the same output as before... Can you use result.embeddings[0].values instead of result.embeddings here?
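A minimal sketch of that suggested change, assuming the client and text variables set up earlier in the notebook (the 50-character trim mirrors the quoted Line #5):

result = client.models.embed_content(model="text-embedding-004", contents=text)

# result.embeddings is a list of embedding objects; take the first one's raw
# float vector and print only a truncated preview of it.
print(str(result.embeddings[0].values)[:50], '... TRIMMED]')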
done
@@ -72,7 +72,7 @@
},
Line #1. print(len(result.embeddings)) # The embeddings have 768 dimensions
This is also incorrect.
Since this is batching/unbatching, it might be clearer to use lists as input and unpack the results when they're generated, e.g.:
texts = ["Hello world"]
result = client.models.embed_content(model="text-embedding-004", contents=texts)
[embedding] = result.embeddings
print(len(embedding.values))
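Unpacking with [embedding] = result.embeddings also fails loudly (a ValueError) if the response does not contain exactly one embedding, which makes the batch-of-one shape explicit.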
Yes, I updated this as well.
@@ -72,7 +72,7 @@
},
This is broken too. Please run this & ensure it executes and that the outputs are correct.
OK, I have run the entire notebook on Colab; all the outputs are correct.
See ReviewNB comments.
Thanks for the contribution!
No, that doc comment looks wrong. It should work with the text-embedding-004 model too (as seen in this notebook). Edit: I'll get this updated. It'll take a little time to approve and propagate, but thanks for pointing it out.
Can anyone tell me if the syntax for taskType of the models.embedContent method has changed in the new Python SDK?

I checked the embeddings docs for it. Does this mean taskType can only be used with embedding-001? The embeddings notebook is using it with text-embedding-004.
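For comparison, a minimal, unverified sketch of how a task type can be passed with the new google-genai SDK, assuming it is supplied through a types.EmbedContentConfig object rather than as a top-level taskType argument (the client setup and task type value here are placeholders):

from google import genai
from google.genai import types

# Hypothetical client setup for illustration only.
client = genai.Client(api_key="YOUR_API_KEY")

# Assumption: in the new SDK the task type is set via EmbedContentConfig,
# whereas the old SDK's embed_content took a task_type keyword directly.
result = client.models.embed_content(
    model="text-embedding-004",
    contents="Hello world",
    config=types.EmbedContentConfig(task_type="SEMANTIC_SIMILARITY"),
)
print(len(result.embeddings[0].values))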