
LangStream Documentation

Build and run Gen AI applications with ease



LangStream is a framework for building and running Generative AI (Gen AI) applications that process data in real time. It lets you combine the power of Large Language Models (LLMs) like GPT-4 and vector databases like Astra DB and Pinecone with the agility of stream processing to create powerful Gen AI applications.
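For example, a pipeline that computes embeddings for incoming text and writes the vectors to a vector database can be described declaratively. The agent types (`compute-ai-embeddings`, `vector-db-sink`) are built in; the topic, model, datasource, and field names below are illustrative:

```yaml
# pipeline.yaml (sketch; names are illustrative, not a complete application)
name: "Embed and store documents"
topics:
  - name: "input-topic"
    creation-mode: create-if-not-exists
pipeline:
  - name: "compute-embeddings"
    type: "compute-ai-embeddings"
    input: "input-topic"
    configuration:
      model: "text-embedding-ada-002"
      embeddings-field: "value.embeddings"
      text: "{{ value.text }}"
  - name: "write-to-vector-db"
    type: "vector-db-sink"
    configuration:
      datasource: "MyVectorDB"
```

Each step reads records from the previous one, so adding a processing stage is a matter of appending another agent to the `pipeline` list.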

Using LangStream, you can develop and test your Gen AI applications on your laptop, then deploy them to a production environment powered by Kubernetes and Kafka with a single CLI command.

LangStream applications are fundamentally event-driven. This architecture makes it easy to build reactive Gen AI applications that are scalable, fault-tolerant, and highly available.

To make it easy to build Gen AI applications, LangStream comes with several pre-built, configuration-driven agents. There are agents for working with AI chat APIs, vector databases, and text processing, to name a few. If the pre-built agents don't meet your needs, you can easily create your own agents using Python. The LangStream runtime comes preloaded with recent versions of popular Gen AI libraries like LangChain and LlamaIndex.
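As a sketch of the custom-agent model: a Python processor is a class exposing a `process` method that maps one input record to a list of output records. The real record type comes from the langstream Python SDK; a minimal stand-in is defined here so the example is self-contained:

```python
# Sketch of a custom LangStream Python processor. In a real application the
# record type is provided by the langstream SDK; this stand-in only mimics
# its shape (a value plus an optional key) to keep the example runnable.

class SimpleRecord:
    """Stand-in for the SDK's record type."""
    def __init__(self, value, key=None):
        self._value = value
        self._key = key

    def value(self):
        return self._value

    def key(self):
        return self._key


class Exclamation:
    """A processor: process() takes one record and returns a list of records."""
    def process(self, record):
        return [SimpleRecord(record.value() + "!!")]


if __name__ == "__main__":
    out = Exclamation().process(SimpleRecord("hello"))
    print(out[0].value())  # hello!!
```

In a deployed application, the class name is referenced from the pipeline configuration and the runtime drives `process` for each incoming record.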

Features

  • Built on top of proven production technologies like Kubernetes and Apache Kafka

  • Pre-built integrations with LLM services like ChatGPT, Google Vertex AI, and Hugging Face

  • Pre-built integration with vector embedding services from OpenAI, Google, and Hugging Face. Also includes the ability to download and run open-source embedding models from Hugging Face.

  • Pre-built integration with vector databases like Pinecone and Astra DB Vector

  • Prompt templating that combines event data, semantic search, database queries, and more for generating prompts with rich context

  • Unstructured (PDF, Word, HTML, etc.) and structured data processing

  • Run Kafka Connect sinks and sources for real-time integration with external systems

  • Prometheus metrics and Kubernetes logs for observability

  • VS Code Extension for developing and debugging applications

Use Cases

  • Q&A chatbots over private data using Retrieval-Augmented Generation (RAG)

  • Vector embedding pipelines for managing the lifecycle of vector embeddings

  • Automatic text summarization pipelines

  • Personalized recommendation pipelines
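The RAG use case above maps naturally onto two built-in agents chained together: `query-vector-db` to retrieve context, then `ai-chat-completions` to generate the answer. The agent types are real; the model, datasource, query, and field names in this sketch are illustrative:

```yaml
# RAG sketch: retrieve related documents, then prompt the LLM with them
pipeline:
  - name: "lookup-related-documents"
    type: "query-vector-db"
    configuration:
      datasource: "MyVectorDB"
      query: "SELECT text FROM documents ORDER BY cosine_similarity(embeddings, ?) LIMIT 5"
      fields:
        - "value.question_embedding"
      output-field: "value.related_documents"
  - name: "answer-question"
    type: "ai-chat-completions"
    configuration:
      model: "gpt-4"
      completion-field: "value.answer"
      messages:
        - role: user
          content: |
            Answer using only this context: {{ value.related_documents }}
            Question: {{ value.question }}
```

The prompt template pulls the retrieved documents into the message, which is how event data and semantic search results combine into a context-rich prompt.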

Architecture

Get started here!