Instances
The instance file declares what infrastructure will support the streaming and compute portions of the pipeline. Streaming refers to the messaging platform where the topics will be created. Compute refers to where the agents will run the pipeline steps.
Globals
Within instance.yaml, use "globals" to define values for parameters across your application.
For example, this instance defines topicName as a global parameter with the value "input-topic":
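A minimal sketch of the corresponding instance.yaml (the otherTopicName entry uses the dotenv-backed form described next):

```yaml
instance:
  globals:
    # plain literal value
    topicName: "input-topic"
    # loaded from the environment, with a default after ":-"
    otherTopicName: "${OTHER_TOPIC_NAME:-default-topic-name}"
```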
The second global, otherTopicName, uses an alternate declaration method where the value is loaded from a dotenv file containing an OTHER_TOPIC_NAME="value" line. The ":-" characters let you designate a default value, in this case "default-topic-name".
The topicName parameter can now be referenced wherever you need it, for example in your application's pipeline.yaml file:
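A sketch of how the global might be referenced in pipeline.yaml (the topics section shown here is illustrative, not taken from this page):

```yaml
topics:
  - name: "${globals.topicName}"
    creation-mode: create-if-not-exists
```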
You can also use these parameters when creating assets, as in CREATE TABLE IF NOT EXISTS ${globals.vectorKeyspace}.${globals.vectorTable}.
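As an illustrative sketch (the asset name, type, and field names below are assumptions, not defaults from this page), that statement could appear in an assets declaration like this:

```yaml
assets:
  - name: "vector-table"               # hypothetical asset name
    creation-mode: create-if-not-exists
    config:
      create-statements:
        - "CREATE TABLE IF NOT EXISTS ${globals.vectorKeyspace}.${globals.vectorTable} (id text PRIMARY KEY, embedding vector<float, 1536>);"
```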
Manifest
instance
Top-level node.

globals (object)
A set of name:value pairs that should be applied to all clusters. Example: tableName: "vsearch.products"

streamingCluster (object)
The settings of the messaging platform used to stream data. See the ref below for more.

computeCluster (object)
The settings of the cluster where agents process data. See the ref below for more.
streamingCluster

type (string)
The type name of the messaging platform to be used. Refer to the instance clusters area for supported types.

configuration (object)
Configuration of the streaming platform. Refer to the instance clusters area for supported configuration options.
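As a sketch, a Kafka-backed streamingCluster might look like the following; the broker address and the exact configuration keys are assumptions for illustration, so check the instance clusters reference for the supported settings:

```yaml
streamingCluster:
  type: "kafka"
  configuration:
    admin:
      # placeholder broker address; replace with your bootstrap server
      bootstrap.servers: "localhost:9092"
```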
DataStax Astra users
To use your Astra streaming tenant as the streaming cluster with LangStream, enable the Starlight for Kafka feature. Doing so will provide you with the needed bootstrap and security information to use the kafka type.
Read more about enabling Starlight for Kafka in Astra Streaming in the documentation and also in the learning site. Learn more about the Starlight for Kafka project here.