Develop, test and deploy

Selecting an environment

You can run a LangStream application locally in a Docker container for development, or deploy it to a LangStream cluster for production. A typical path is to develop and test on Docker first, then deploy to a cluster; the two steps below cover both.

Step 1. Running your application on Docker

Using the LangStream CLI, you can run your application directly:

 langstream docker run super-cool-app -app ./application -s ./secrets.yaml

In this case the instance.yaml file is optional: if you don't provide one, the CLI uses a default configuration. The CLI starts a Docker container using the official LangStream Docker image of the same version as the CLI.

By default, the container runs all the LangStream components, a Kafka broker, an S3-compatible service (MinIO), and an embedded vector database (HerdDB). This is ideal for testing your application locally.

The Docker container exposes the Control Plane on its default port (8090) and the API Gateway on its default port (8091), so you can run most CLI commands against the local container, especially the commands that interact with the API Gateway.
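
For example, while the container is running you can inspect the application from another terminal. A minimal sketch, assuming the CLI's default local configuration (the app name matches the one used above):

    # check the status of the application on the local Control Plane
    langstream apps get super-cool-app

    # tail the application logs
    langstream apps logs super-cool-app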

When you stop the application with Ctrl-C, the environment is automatically disposed of. If you need your topics or the S3 data to persist, you have to build your own instance.yaml file and pass it using the "-i" flag.
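
As a sketch, an instance.yaml that points the application at an external Kafka broker, so topics outlive the container, might look like this; the field layout follows the Instances page, and the broker address is a placeholder:

    instance:
      streamingCluster:
        type: "kafka"
        configuration:
          admin:
            bootstrap.servers: "my-kafka-broker:9092"

Then pass it on the command line:

    langstream docker run super-cool-app -app ./application -i ./instance.yaml -s ./secrets.yaml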

Please refer to the LangStream Docker documentation for more details about the docker run command.

Step 2. Deploy your application on a LangStream cluster

The CLI allows you to deploy and manage applications from your local environment. You can also use the VSCode extension to create and manage applications on a local or remote cluster.

This approach works well if you need a cloud environment or a SaaS service, or if you need advanced features like scaling or resource management.

Using the LangStream CLI, a typical workflow is:

  1. Configure the CLI to connect to the remote cluster and set the tenant (a configuration sketch follows this list). If you are using a local minikube cluster this is not necessary: you can use the default configuration. If you are using a remote cluster, you have to configure the credentials, the tenant name, and the Control Plane address.

  2. When ready, deploy your application:

    langstream apps deploy super-cool-app -app ./application -i ./instance.yaml -s ./secrets.yaml
  3. Follow the progress of the deployment by watching the logs:

    langstream apps logs "super-cool-app"
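
A minimal sketch of the configuration in step 1, assuming a remote cluster; the URLs, tenant name, and token are placeholders, and the available keys are covered on the CLI Configuration page:

    # point the CLI at the remote Control Plane and API Gateway
    langstream configure webServiceUrl https://langstream.example.com
    langstream configure apiGatewayUrl wss://gateway.langstream.example.com

    # select the tenant and set the credentials
    langstream configure tenant my-tenant
    langstream configure token my-token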