# AI Chat (`llm`)

- Type: `llm`
- Category: `flow`
- Tags: `llm`, `ai`, `text`
## Description

Generates a response using AI.
## Parameters

| Name | Type | Description | Required | Default |
| --- | --- | --- | --- | --- |
| system | string | System / role prompt | no | |
| prompt | string | User prompt | no | |
## Help

### Overview
The AI Chat (`llm`) worker produces a textual reply by invoking a language-model backend. It accepts a system prompt that defines the model's role or behavior, and a user prompt that contains the actual request, then returns the model's generated response. This component can be used wherever automated, context-aware text generation is required (e.g., chat bots, content assistance, or data-driven summarisation).
### Inputs

- `system` (string) – The system-level instruction that sets the model's persona, tone, or constraints.
- `prompt` (string) – The user-level query or command that the model should answer.

Although the parameter table above marks both fields as optional, in practice both should be supplied; omitting either will cause the worker to return an error.
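The input requirement above can be sketched as a small payload-building helper. This is a hypothetical illustration, not part of the worker itself; the worker performs its own checks on the payload it receives.

```python
import json


def build_llm_payload(system: str, prompt: str) -> str:
    """Build the JSON payload for the llm worker.

    Raises ValueError if either field is missing or empty, mirroring
    the worker's rejection of incomplete input.
    """
    if not system or not prompt:
        raise ValueError("Both 'system' and 'prompt' must be supplied.")
    return json.dumps({"system": system, "prompt": prompt})
```

Validating on the caller's side like this surfaces missing fields before a round trip to the worker.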
### Minimal Example

Below is a concise example of the JSON payload sent to the worker (the exact transport mechanism, e.g. HTTP or an SDK, may vary).

```json
{
  "system": "You are a helpful assistant that provides concise answers.",
  "prompt": "What are the main benefits of using renewable energy?"
}
```
Typical response:

```json
{
  "response": "Renewable energy reduces greenhouse-gas emissions, lowers dependence on fossil fuels, and can lower electricity costs over time."
}
```
Replace the `system` and `prompt` values with the context appropriate for your application, then submit the payload to the worker endpoint to obtain the generated text.
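For an HTTP transport, the round trip can be sketched as follows. Both the endpoint URL and the `{"response": ...}` reply shape are assumptions taken from the example above; adapt them to your actual deployment and add any authentication headers it requires.

```python
import json
import urllib.request

# Hypothetical endpoint; substitute your deployment's worker URL.
WORKER_URL = "http://localhost:8080/workers/llm"


def build_llm_request(system: str, prompt: str,
                      url: str = WORKER_URL) -> urllib.request.Request:
    """Construct the HTTP POST request carrying the worker payload."""
    body = json.dumps({"system": system, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def call_llm_worker(system: str, prompt: str, url: str = WORKER_URL) -> str:
    """Send the request and return the generated text from the reply."""
    with urllib.request.urlopen(build_llm_request(system, prompt, url)) as resp:
        return json.loads(resp.read())["response"]
```

Separating request construction from sending keeps the payload logic testable without a live worker.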