Agent Messaging
Agent message structure and templating
All agents accept two message formats: plain text and JSON-formatted text. Plain text is simply an unstructured string like "this is a string". JSON-formatted text is a string with structure, like {"a": "this is a", "b": "json string"}.
Plain text input
If the input message to an agent is plain text, you refer to it as "{{% value}}". For example, in the ai-chat-completions agent, if you wanted to append the text of the input message to a pre-formatted prompt, you would set the message configuration along the lines of the sketch below (the model name, topic names, and completion-field are illustrative):
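```yaml
- name: "chat-completion"
  type: "ai-chat-completions"
  input: "input-topic"       # plain-text messages arrive here
  output: "output-topic"
  configuration:
    model: "gpt-3.5-turbo"   # illustrative model name
    completion-field: "value.answer"
    messages:
      - role: user
        content: "Answer the following question: {{% value}}"
```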
Or, in the query agent, if you wanted to include the input message as part of the query, you could reference it the same way (the datasource name, table, and output field below are illustrative):
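```yaml
- name: "lookup-account"
  type: "query"
  configuration:
    datasource: "my-database"   # illustrative datasource name
    query: "SELECT * FROM accounts WHERE description LIKE ?"
    fields:
      - "value"                 # the entire plain-text input message
    output-field: "value.query-result"
```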
JSON text input
As a JSON-formatted string, the message gives the agent more structure to work with: you can reference individual values within the text. Refer to the entire JSON document as "{{% value}}", or refer to a specific value as "{{% value.<label>}}".
If the input message had the format:
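```json
{
  "first-name": "joe",
  "last-name": "schmo",
  "account-id": "12345",
  "aDot-value": "asdf",
  "address": {
    "street": "123 street rd",
    "city": "hollywood",
    "state": "ca"
  }
}
```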
Then you would have the following references available in the agent:
| Reference | Value |
| --- | --- |
| {{% value.first-name}} | joe |
| {{% value.last-name}} | schmo |
| {{% value.account-id}} | 12345 |
| {{% value.aDot-value}} | asdf |
| {{% value.address.street}} | 123 street rd |
| {{% value.address.city}} | hollywood |
| {{% value.address.state}} | ca |
The same query from the previous example could be expanded to reference individual fields of the JSON message (again, the datasource, table, and column names are illustrative):
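```yaml
- name: "lookup-account"
  type: "query"
  configuration:
    datasource: "my-database"
    query: "SELECT balance FROM accounts WHERE id = ? AND last_name = ?"
    fields:
      - "value.account-id"      # 12345
      - "value.last-name"       # schmo
    output-field: "value.query-result"
```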
Output
All agents output results as JSON text, regardless of whether the input was plain text or JSON.
Implicit input and output topics
Pipeline agents were designed with debugging and flexibility in mind. Every agent can watch an input topic for new messages and write the result of its processing to an output topic. This provides useful debugging points for seeing how a given agent's configuration was applied to the data, and it can also enable processing optimizations for managing topic backpressure and message backlogs.
Some agents don't need an input or output topic specified. In those cases, when LangStream makes the pipeline runnable, it bypasses the message topic and "feeds" data directly to the next step. This offers better processing time, but the trade-off is needing more memory and compute.
As a best practice during pipeline development, use input/output topics between all agent steps. Performance is rarely a concern at that stage, and the topics make processing more "visible". Once development is stable, you can optionally remove some of the intermediate topics to improve processing time, but it's not common for a pipeline that does significant processing (3-7 steps) to have no topics between steps.
As you learn more about each agent's capabilities, look for the "Input" and "Output" sections to identify whether it supports implicit topics.
Example of a pipeline using topics between all agent steps (a sketch; the agent types, topic names, and configuration values are illustrative):
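```yaml
name: "Example with topics between every step"
topics:
  - name: "input-topic"
    creation-mode: create-if-not-exists
  - name: "question-topic"
    creation-mode: create-if-not-exists
  - name: "answer-topic"
    creation-mode: create-if-not-exists
  - name: "output-topic"
    creation-mode: create-if-not-exists
pipeline:
  - name: "convert-to-json"
    type: "document-to-json"
    input: "input-topic"
    output: "question-topic"
    configuration:
      text-field: "question"
  - name: "chat-completion"
    type: "ai-chat-completions"
    input: "question-topic"
    output: "answer-topic"
    configuration:
      model: "gpt-3.5-turbo"
      completion-field: "value.answer"
      messages:
        - role: user
          content: "Answer the question: {{% value.question}}"
  - name: "remove-question"
    type: "drop-fields"
    input: "answer-topic"
    output: "output-topic"
    configuration:
      fields:
        - "question"
```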
The same example pipeline, not using topics to transport data between the steps (only the first agent reads from a topic and the last agent writes to one):
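```yaml
name: "Example without intermediate topics"
topics:
  - name: "input-topic"
    creation-mode: create-if-not-exists
  - name: "output-topic"
    creation-mode: create-if-not-exists
pipeline:
  - name: "convert-to-json"
    type: "document-to-json"
    input: "input-topic"
    configuration:
      text-field: "question"
  - name: "chat-completion"
    type: "ai-chat-completions"
    configuration:
      model: "gpt-3.5-turbo"
      completion-field: "value.answer"
      messages:
        - role: user
          content: "Answer the question: {{% value.question}}"
  - name: "remove-question"
    type: "drop-fields"
    output: "output-topic"
    configuration:
      fields:
        - "question"
```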
Adding input and output topics between agents also changes how many pods are deployed. For example, deploying the pipeline above with no topics between agents feeds data directly from step to step, with all processing taking place in a single pod. This offers better processing time, but the trade-off is that the single pod needs more memory and compute.
Creating the same pipeline with topics between agents creates 3 worker pods and splits the processing across them.