LangServe

LangServe is a popular runtime for executing LangChain applications.

LangStream natively integrates with LangServe and allows you to invoke services exposed by LangServe applications.

Use the built-in langserve-invoke agent to implement this integration.

This example invokes a LangServe application that exposes a service at http://localhost:8000/chain/stream.

topics:
  - name: "input-topic"
    creation-mode: create-if-not-exists
  - name: "output-topic"
    creation-mode: create-if-not-exists
  - name: "streaming-answers-topic"
    creation-mode: create-if-not-exists
pipeline:
  - type: "langserve-invoke"
    input: input-topic
    output: output-topic
    id: step1
    configuration:
      # Field of the output record that receives the final answer
      output-field: value.answer
      # Stream partial answers to this topic as they arrive
      stream-to-topic: streaming-answers-topic
      # Field of the streamed records that receives each chunk
      stream-response-field: value
      # Group at least this many chunks into each streamed message
      min-chunks-per-message: 10
      debug: false
      method: POST
      allow-redirects: true
      handle-cookies: false
      url: "http://host.docker.internal:8000/chain/stream"
      # Build the request payload: the record value becomes the
      # "topic" input of the LangServe chain
      fields:
        - name: topic
          expression: "value"
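
The fields list builds the input object sent to the LangServe chain: here the value of each incoming record becomes the chain's topic input. Conceptually, for a record whose value is "bears", the agent issues something like the following HTTP call (a sketch of the equivalent request, not the agent's actual implementation):

import requests

# The fields list above maps the record value to the "topic" input;
# /stream endpoints follow the LangServe protocol and return
# server-sent events carrying the partial answers.
response = requests.post(
    "http://localhost:8000/chain/stream",
    json={"input": {"topic": "bears"}},
    stream=True,
)
for line in response.iter_lines():
    if line:
        print(line.decode())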

When you run the LangStream application in Docker, the URL is http://host.docker.internal:8000/chain/stream because Docker Desktop exposes services running on the host machine to containers under the host.docker.internal hostname.

To make your LangStream application accessible from a UI, configure a gateway:

gateways:
  - id: chat
    type: chat
    chat-options:
      answers-topic: streaming-answers-topic
      questions-topic: input-topic
      headers:
        - value-from-parameters: session-id
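
A UI talks to a chat gateway over WebSocket, sending questions and receiving the streamed answers on the same connection. Here is a minimal client sketch; the port, tenant, application id and message envelope are assumptions to adapt to your deployment (the docker runtime exposes the gateway on port 8091 by default):

import asyncio
import json
import websockets  # pip install websockets

async def chat():
    # Assumed URL layout: /v1/chat/{tenant}/{application}/{gateway-id};
    # "default" and "my-app" are placeholders for your deployment, and
    # session-id matches the value-from-parameters header above.
    uri = ("ws://localhost:8091/v1/chat/default/my-app/chat"
           "?param:session-id=user-1")
    async with websockets.connect(uri) as ws:
        # The record value becomes the chain's "topic" input
        await ws.send(json.dumps({"value": "bears"}))
        # Print the streamed answer messages as they arrive
        async for message in ws:
            print(json.loads(message))

asyncio.run(chat())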

Starting the LangServe application locally

This is the sample code of the LangServe application:

from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langserve import add_routes


app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple api server using Langchain's Runnable interfaces",
)

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
add_routes(
    app,
    prompt | model,
    path="/chain",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)

Start the LangServe application with the following command:

export OPENAI_API_KEY=...
pip install fastapi langserve langchain openai sse_starlette uvicorn
python example.py
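
Once the server is up, you can sanity-check the chain before wiring it into LangStream. A minimal sketch using the RemoteRunnable client that ships with langserve (the {"topic": ...} input matches the prompt template above):

from langserve import RemoteRunnable

# Connect to the running LangServe application and stream an answer.
chain = RemoteRunnable("http://localhost:8000/chain/")
for chunk in chain.stream({"topic": "bears"}):
    print(chunk.content, end="", flush=True)
print()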

Starting the LangStream application locally

To run the LangStream application locally in Docker:

langstream docker run -app /path/to/application

The LangStream UI will be running at http://localhost:8092/.