chatbot.yaml contains the chain of agents that makes up your program, along with the input and output topics they communicate through.
topics:
  - name: "input-topic"
    creation-mode: create-if-not-exists
  - name: "output-topic"
    creation-mode: create-if-not-exists
pipeline:
  - name: "ai-chat-completions"
    type: "ai-chat-completions"
    input: "input-topic"
    output: "output-topic"
    errors:
      on-failure: skip
    configuration:
      model: "gpt-3.5-turbo"
      completion-field: "value"
      messages:
        - role: user
          content: "What can you tell me about {{ value }}?"
You may notice that the "langstream docker run" command used below doesn't reference an instance.yaml file to define the application's runtime environment. Instead, "langstream docker run" supplies a default instance.yaml that points at a Kafka broker running inside the Docker container.
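For reference, a hand-written instance.yaml roughly equivalent to that default might look like the sketch below. The broker address and cluster types are assumptions inferred from the statuses reported later in this section, so check the LangStream documentation for the exact defaults in your version:

instance:
  streamingCluster:
    type: "kafka"
    configuration:
      admin:
        bootstrap.servers: "localhost:9092"  # assumed address of the embedded broker
  computeCluster:
    type: "kubernetes"  # assumed; matches the COMPUTE column reported by the CLI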
Deploy your application from the project folder; here, we're calling the deployed application sample-app:
langstream docker run sample-app -app ./application -s ./secrets.yaml
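Here -app points at the directory containing your pipeline files (such as chatbot.yaml), and -s supplies the secrets the application references, for example the OpenAI credentials required by gpt-3.5-turbo. A minimal secrets.yaml might look like the sketch below; the secret id and key name are assumptions and must match whatever your application's configuration actually references:

secrets:
  - id: open-ai  # assumed id; must match the secret reference in your configuration
    data:
      access-key: "<your OpenAI API key>"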
You should see a Docker container starting and then running the application. To make sure your app is running, open a new terminal and inspect the status of the container:
docker ps
Result:
CONTAINER ID   IMAGE                                                 COMMAND                CREATED         STATUS         PORTS                              NAMES
421bb7e082bb   ghcr.io/langstream/langstream-runtime-tester:0.0.21   "/app/entrypoint.sh"   2 minutes ago   Up 2 minutes   0.0.0.0:8090-8091->8090-8091/tcp
You can also use the LangStream CLI to inspect the status of the application:
langstream apps get sample-app
Result:
ID           STREAMING   COMPUTE      STATUS     EXECUTORS   REPLICAS
sample-app   kafka       kubernetes   DEPLOYED   1/1         1/1
To the LangStream CLI, the application appears to be running on "kubernetes" even though you are using Docker mode; this is because the Docker container partially emulates the Kubernetes environment.