AI Chat
Type: llm • Category: flow • Tags: llm, ai, text
Description
Generates a response using an AI model.
Parameters
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| system | string | System/role prompt | no | |
| prompt | string | User prompt | no | |
Help
AI Chat Worker
Generates responses using Large Language Models through the OpenRouter API.
How it works:
- Prompt Construction: Combines system prompt, user prompt, and optional context
- API Call: Sends request to OpenRouter with specified model and parameters
- Response Processing: Returns generated text and usage information
Parameters:
- system: System/role prompt to set AI behavior
- prompt: Main user prompt/question
- model: AI model to use (default: meta-llama/llama-3.1-8b-instruct)
- temperature: Creativity/randomness (0.0-1.0, default: 0.7)
- maxTokens: Maximum response length (default: 512)
- json: Force JSON response format (default: false)
- contextExpr: Expression for additional context data
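The parameters above map naturally onto an OpenAI-compatible chat-completions request, which is the convention OpenRouter uses. The sketch below is illustrative only: the function and field names are assumptions, not the worker's actual internals.

```python
# Hypothetical sketch: mapping the worker's parameters onto an
# OpenAI-compatible chat-completions payload. Names here are
# assumptions, not the worker's real implementation.
def build_payload(prompt, system=None,
                  model="meta-llama/llama-3.1-8b-instruct",
                  temperature=0.7, max_tokens=512, force_json=False):
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    payload = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    if force_json:
        # OpenAI-style structured-output switch; model support varies.
        payload["response_format"] = {"type": "json_object"}
    return payload
```

Defaults mirror the list above (model, temperature 0.7, max_tokens 512, JSON mode off).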
Supported Models:
- meta-llama/llama-3.1-8b-instruct (default)
- gpt-4, gpt-3.5-turbo
- claude-3, claude-2
- gemini-pro
- And many more available through OpenRouter
Context Integration:
- contextExpr can reference workflow data and variables
- Automatically formatted as JSON in the prompt
- Useful for providing relevant data to the AI
Response Format:
- text: Generated response content
- usage: Token usage statistics (if available)
- error: Error message (if request failed)
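Assuming OpenRouter's OpenAI-compatible response layout, normalizing a raw API response into the text/usage/error shape documented above might look like this (a sketch, not the worker's actual code):

```python
# Map an OpenAI-compatible chat-completions response onto the
# worker's documented output fields (text / usage / error).
def normalize_response(raw):
    if "error" in raw:
        return {"error": raw["error"].get("message", "request failed")}
    return {
        "text": raw["choices"][0]["message"]["content"],
        "usage": raw.get("usage"),  # may be absent for some models
    }
```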
Examples:
- Simple chat: prompt="Explain quantum computing in simple terms"
- With context: contextExpr="data.user_profile", prompt="Generate a personalized recommendation"
- Code generation: prompt="Write a Python function to calculate fibonacci numbers"
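Spelled out as a full parameter set, the first example might be configured like this (field names follow the Parameters list above; the exact config syntax depends on your workflow definition format):

```json
{
  "system": "You are a helpful assistant.",
  "prompt": "Explain quantum computing in simple terms",
  "model": "meta-llama/llama-3.1-8b-instruct",
  "temperature": 0.7,
  "maxTokens": 512,
  "json": false
}
```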
Cost Considerations:
- Different models have different pricing
- Monitor token usage for cost control
- Use appropriate model size for your needs
Common Use Cases:
- Text generation and summarization
- Code writing and explanation
- Data analysis and insights
- Customer support automation
- Content creation and editing
Notes:
- Requires OPENROUTER_KEY environment variable
- Responses are limited by maxTokens parameter
- JSON mode ensures structured output when needed
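Putting the notes together, a minimal standalone call against OpenRouter's OpenAI-compatible endpoint could look like the sketch below. The endpoint URL and bearer-token auth follow OpenRouter's public API convention; everything else (function names, error handling) is an assumption for illustration.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_headers(key):
    # Bearer-token auth, as OpenRouter expects.
    return {"Authorization": f"Bearer {key}",
            "Content-Type": "application/json"}

def chat(prompt, model="meta-llama/llama-3.1-8b-instruct"):
    # Fails fast if OPENROUTER_KEY is missing, matching the note above.
    key = os.environ["OPENROUTER_KEY"]
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(OPENROUTER_URL, data=body,
                                 headers=build_headers(key))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```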