Usage of Ollama and local models #1079
Conversation
👍 Looks good to me! Reviewed everything up to 3e3f4de in 29 seconds
More details
- Looked at 486 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 5 drafted comments based on config settings
1. docker-compose.yml:31
- Draft comment:
Adding files directly to volumes in docker-compose.yml is not a best practice, as it can lead to issues with file synchronization and container portability. Consider using a Dockerfile to copy the necessary files into the image instead.
- Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable:
The comment addresses a potential issue with the changes made in the diff, specifically the addition of files to volumes in docker-compose.yml. This practice can indeed lead to issues with file synchronization and container portability, which are valid concerns. The suggestion to use a Dockerfile is a common best practice in Docker usage.
The comment might be too general and not consider specific use cases where adding files to volumes is necessary or beneficial. It assumes that using a Dockerfile is always the better option without considering the context.
While there might be specific cases where adding files to volumes is justified, the comment highlights a general best practice that is applicable in many scenarios. It is a useful suggestion for improving code quality and maintainability.
The comment is relevant to the changes made and provides a valid suggestion for improving the Docker setup. It should be kept as it addresses a potential issue with the current implementation.
2. skyvern/forge/sdk/api/llm/api_handler_factory.py:223
- Draft comment:
There are multiple logging statements added for debugging purposes. Make sure to remove or adjust the verbosity of these logs before merging to avoid cluttering the log output. This applies to other similar logging statements in this file as well. (A sketch of demoting such logs to DEBUG level follows this list.)
- Reason this comment was not posted:
Confidence changes required: 50%
The PR author mentioned that they added a lot of logging for debugging purposes, which should be removed before merging.
3. skyvern/forge/sdk/api/llm/api_handler_factory.py:395
- Draft comment:
Ensure that existing functionality is tested to prevent any breaking changes. Consider adding tests for the new functionality as well.
- Reason this comment was not posted:
Comment did not seem useful.
4. skyvern/forge/sdk/api/llm/config_registry.py:46
- Draft comment:
Ensure that existing functionality is tested to prevent any breaking changes. Consider adding tests for the new functionality as well.
- Reason this comment was not posted:
Marked as duplicate.
5. skyvern/forge/sdk/api/llm/models.py:36
- Draft comment:
Ensure that existing functionality is tested to prevent any breaking changes. Consider adding tests for the new functionality as well.
- Reason this comment was not posted:
Marked as duplicate.
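On the logging point in draft comment 2, here is a minimal sketch (plain standard-library logging; the logger name is illustrative, not Skyvern's actual setup) of keeping the extra diagnostics but demoting them to DEBUG, so they can be silenced by log level instead of having to be stripped out before merging:

```python
import logging

# Flip this to logging.DEBUG while troubleshooting the Ollama integration.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("skyvern.llm.ollama")  # illustrative name, not the real module path

# Hidden at the default INFO level, visible once DEBUG is enabled.
logger.debug("raw Ollama payload: %s", {"model": "ollama/some-model"})

# Kept at INFO because it is useful in normal operation.
logger.info("Ollama LLM handler initialised")
```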
Workflow ID: wflow_DOBw5HxcicjTgvTn
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
haha I love to see terrible code (as a v1!!)
```yaml
# Enable Ollama
- ENABLE_OLLAMA=true
- LLM_KEY=OLLAMA_TEXT
- OLLAMA_API_BASE=http://192.168.1.3:11434
```
Add option to persist the model in memory (in Ollama, a keep_alive of -1 keeps the model loaded indefinitely):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "<model name>",
  "keep_alive": -1
}'
```
@loktar00 hey, is this compatible with llama vision models?
Should be fine with it for the first step. It switches between models, vision first, then it goes to a language model, only because Ollama doesn't support both at once, unless the Vision Support for Llama works now.
You can rapidly load and unload Ollama models via the API endpoint as suggested above; use a keep_alive of 0 to immediately unload said model.
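To make that concrete, here is a small sketch using Python's requests library against Ollama's /api/generate endpoint ("llama3" is just a placeholder for whatever model you have pulled): calling it with no prompt only loads the model, keep_alive of -1 keeps it resident, and keep_alive of 0 unloads it right away.

```python
import requests

OLLAMA = "http://localhost:11434"
MODEL = "llama3"  # placeholder model name

# No prompt: this call only loads the model; keep_alive=-1 keeps it in memory indefinitely.
requests.post(f"{OLLAMA}/api/generate", json={"model": MODEL, "keep_alive": -1})

# keep_alive=0 asks Ollama to unload the model immediately.
requests.post(f"{OLLAMA}/api/generate", json={"model": MODEL, "keep_alive": 0})
```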
Btw, check out this pull: seems like the OP has issues with Ollama-based vision models for some reason.
This is terrible code and I know it. I don't expect this to be merged, but it's here in the hopes that someone else will be able to get it working a bit better.
Full caveat: this "works", but every task has actually failed. For the local models being used, the prompts would definitely need to be adjusted; however, I did get the correct answer on 2 occasions in the logs.
Summary:

Things that exclude it from being merged and would love help with:
- Handling of ```json-fenced responses in api_handler_factory.py
- The docker-compose setup

Important

Adds support for Ollama models, modifies configurations, and includes debugging logs and Docker setup changes:
- Ollama support and ```json response handling in api_handler_factory.py
- Updates to models.py, config.py, and config_registry.py
- docker-compose.yml updated to include Ollama environment variables and volume mounts
- Additional debug logging in api_handler_factory.py and further docker-compose.yml changes

This description was created by Ellipsis for 3e3f4de. It will automatically update as commits are pushed.
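For anyone trying this locally, here is a rough sketch (not the code in this PR) of how the three docker-compose variables above could be exercised end to end, assuming the handler ultimately delegates to LiteLLM; the model name "llama3" and the direct use of litellm.completion are illustrative assumptions, not Skyvern's actual wiring.

```python
import os
import litellm

# Values matching the docker-compose snippet above; the model name is a placeholder.
os.environ.setdefault("ENABLE_OLLAMA", "true")
os.environ.setdefault("OLLAMA_API_BASE", "http://192.168.1.3:11434")

if os.environ["ENABLE_OLLAMA"].lower() == "true":
    response = litellm.completion(
        model="ollama/llama3",                   # LiteLLM routes "ollama/<name>" to an Ollama server
        api_base=os.environ["OLLAMA_API_BASE"],  # the local Ollama instance
        messages=[{"role": "user", "content": "Reply with the word ok."}],
    )
    print(response.choices[0].message.content)
```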