
Fix manual trigger without cache + accept always on pressing a Tab #25

Merged
ggerganov merged 2 commits into master on Feb 7, 2025

Conversation

@igardev (Collaborator) commented on Feb 6, 2025

No description provided.

@ggerganov merged commit 3ae637e into master on Feb 7, 2025
ggerganov added a commit that referenced this pull request Feb 8, 2025
* initial openai compatible api endpoint integration

* fix watch

* added openAiClientModel to config; tested with local vllm server

* fixed config and completions to work with FIM models by default

* remove unnecessary try catch

* core : remove repeating suffix of a suggestion + fix speculative FIM (#18)

* Remove repeating suffix of a suggestion

* If linesuffix is empty - cut the repeating suffix of the suggestion.

* If there is a linesuffix, suggest only one line, don't make hidden second request

* Fix the caching of the future suggestion in case of max inputPrefix length.

---------

Co-authored-by: igardev <[email protected]>
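For reference, the suffix handling described in #18 above can be sketched roughly as follows. This is a hypothetical illustration rather than the extension's actual code; the function name and its parameters (suggestion, lineSuffix, inputSuffix) are assumptions.

```ts
// Hypothetical sketch of the #18 behavior (not the actual implementation).
// lineSuffix  = text after the cursor on the current line
// inputSuffix = text after the cursor in the rest of the document
function trimSuggestion(suggestion: string, lineSuffix: string, inputSuffix: string): string {
    if (lineSuffix.length > 0) {
        // There is text after the cursor on this line: suggest only one line.
        return suggestion.split("\n")[0];
    }
    // Line suffix is empty: cut the longest trailing part of the suggestion that the
    // existing suffix already begins with, so accepting it does not duplicate text.
    const suffix = inputSuffix.trimStart();
    for (let len = Math.min(suggestion.length, suffix.length); len > 0; len--) {
        if (suffix.startsWith(suggestion.slice(suggestion.length - len))) {
            return suggestion.slice(0, suggestion.length - len);
        }
    }
    return suggestion;
}
```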

* core : disable trimming of suggestions

* release : v0.0.6

* readme : add CPU-only configs

* fixed configuration/settings UI

* fixed conflicts

* fix watch

* fixed

* fixes

* update version

* readme : add example

* core : fix cutting the lines of a suggestion (#22)

* Fix the problem with cutting the lines of a suggestion after the first one.

* Remove the less important checks on cutting the suggestion.

---------

Co-authored-by: igardev <[email protected]>

* Fix manual trigger without cache + accept always on pressing a Tab (#25)

* Ensure Ctrl+Shift+L always makes a new request to the servers.

* If a suggestion is visible - pressing a Tab always accepts it.

---------

Co-authored-by: igardev <[email protected]>
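In a VS Code extension, the two behaviors from this PR could look roughly like the sketch below. It is only an illustrative sketch: the command id llama-vscode.manualTrigger and the completionCache variable are assumptions, while editor.action.inlineSuggest.trigger, editor.action.inlineSuggest.commit, and the inlineSuggestionVisible context key are standard VS Code facilities.

```ts
// Hypothetical sketch of the two behaviors in this PR (not the actual extension code).
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
    const completionCache = new Map<string, string>(); // assumed local suggestion cache

    // Manual trigger (bound to Ctrl+Shift+L in package.json): always make a new request
    // to the server instead of serving a completion from the local cache.
    context.subscriptions.push(
        vscode.commands.registerCommand("llama-vscode.manualTrigger", async () => {
            completionCache.clear(); // bypass cached suggestions for this request
            await vscode.commands.executeCommand("editor.action.inlineSuggest.trigger");
        })
    );

    // Accepting with Tab is a keybinding concern: in package.json, Tab can be bound to the
    // built-in "editor.action.inlineSuggest.commit" command with a when clause such as
    // "inlineSuggestionVisible && !editorTabMovesFocus", so a visible suggestion is always accepted.
}
```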

* fixed conflicts

* fix watch

* fixed

* fixes

* initial openai compatible api endpoint integration

* added openAiClientModel to config; tested with local vllm server

* fixed config and completions to work with FIM models by default

* fixed

* make api key optional for openai compatible endpoints as well

* updated to work with llama.cpp without api key

* removed this.handleOpenAICompletion() call from prepareLlamaForNextCompletion per @igardev

* updated package-lock.json after build

---------

Co-authored-by: igardev <[email protected]>
Co-authored-by: igardev <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>
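The OpenAI-compatible endpoint integration mentioned in several of the bullets above amounts to posting to a /v1/completions-style endpoint (as served by vllm or llama.cpp) and sending an Authorization header only when a key is configured. Below is a minimal sketch under those assumptions; the function name, endpoint layout, and request parameters are illustrative and not the extension's actual code.

```ts
// Minimal sketch: query an OpenAI-compatible completions endpoint with an optional API key.
async function requestCompletion(
    endpoint: string,   // e.g. "http://localhost:8000/v1" (assumed)
    model: string,      // e.g. the value of a setting such as openAiClientModel (assumed)
    prompt: string,
    apiKey?: string     // optional: local servers often require no key
): Promise<string> {
    const headers: Record<string, string> = { "Content-Type": "application/json" };
    if (apiKey) {
        headers["Authorization"] = `Bearer ${apiKey}`; // only sent when a key is configured
    }
    const response = await fetch(`${endpoint}/completions`, {
        method: "POST",
        headers,
        body: JSON.stringify({ model, prompt, max_tokens: 128, temperature: 0.1, stream: false }),
    });
    if (!response.ok) {
        throw new Error(`Completion request failed: ${response.status}`);
    }
    const data = (await response.json()) as { choices: { text: string }[] };
    return data.choices[0]?.text ?? "";
}
```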