Ollama
The ollama-configuration resource configures access to Ollama models for embeddings and completions via the Ollama REST API.
Check out the docs at Ollama.ai to learn more about how to start it locally.
configuration.yaml
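A minimal sketch of a configuration.yaml entry. The resource type ollama-configuration comes from the text above; the resource name and the url property (read from a secret) are assumptions for illustration.

```yaml
configuration:
  resources:
    # Declares the Ollama resource so pipelines can use it for
    # embeddings and completions (name and url are illustrative).
    - type: "ollama-configuration"
      name: "ollama"
      configuration:
        url: "${secrets.ollama.url}"
```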
secrets.yaml
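A matching secrets.yaml sketch, assuming a secret with id ollama that holds the service URL. It points at host.docker.internal on the default Ollama port 11434, as described below.

```yaml
secrets:
  # Referenced from configuration.yaml as ${secrets.ollama.url}
  - id: ollama
    data:
      url: "http://host.docker.internal:11434"
```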
This example uses host.docker.internal at the default Ollama port 11434. This works well if you are running Ollama on your local machine with LangStream in Docker.
Configuration
Check out the full configuration properties in the API Reference page.