Update ollama to v0.6.5 (#2544)

Alexander L. 2025-04-11 13:30:35 +02:00 committed by GitHub
parent fd43de1e61
commit 6fccf34117
GPG Key ID: B5690EEEBB952194
2 changed files with 4 additions and 7 deletions


@@ -8,7 +8,7 @@ services:
       PROXY_AUTH_ADD: "false"
   ollama:
-    image: ollama/ollama:0.6.4@sha256:476b956cbe76f22494f08400757ba302fd8ab6573965c09f1e1a66b2a7b0eb77
+    image: ollama/ollama:0.6.5@sha256:96b7667cb536ab69bfd5cc0c2bd1e29602218e076fe6d34f402b786f17b4fde0
     environment:
       OLLAMA_ORIGINS: "*"
       OLLAMA_CONTEXT_LENGTH: 8192

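Note (not part of this commit): because the compose file pins the image by digest as well as by tag, bumping the tag alone is not enough — the digest must be refreshed to match the new tag. A minimal sketch of resolving a tag's current multi-arch index digest, assuming Docker with the buildx plugin is installed and Docker Hub is reachable:

```shell
# Query the registry for the digest that the 0.6.5 tag currently points to.
# Requires network access and the docker buildx plugin.
docker buildx imagetools inspect ollama/ollama:0.6.5 --format '{{.Manifest.Digest}}'

# The output (sha256:96b7667c...) is what gets appended after the tag in the
# compose file's image reference, so pulls are reproducible even if the tag moves.
```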

@@ -3,7 +3,7 @@ id: ollama
 name: Ollama
 tagline: Self-host open source AI models like DeepSeek-R1, Llama, and more
 category: ai
-version: "0.6.4"
+version: "0.6.5"
 port: 11434
 description: >-
   Ollama allows you to download and run advanced AI models directly on your own hardware. Self-hosting AI models ensures full control over your data and protects your privacy.
@@ -37,11 +37,8 @@ defaultPassword: ""
 dependencies: []
 releaseNotes: >-
   Highlights:
-    - /api/show now includes model capabilities like vision
-    - Fixed out-of-memory errors with parallel requests on Gemma 3
-    - Improved Gemma 3's handling of multilingual characters
-    - Fixed context shifting issues in DeepSeek models
-    - Resolved Gemma 3 output degradation after 512/1024 tokens in 0.6.3
+    - Support for Mistral Small 3.1, the best performing vision model in its weight class
+    - Improved model loading times for Gemma 3 on network-backed filesystems
   Full release notes are available at https://github.com/ollama/ollama/releases