
Signal Generator & Backtest Strategy - Build and Validate Trading Strategies Without Code

· 8 min read
ApudFlow OS
Platform Updates

Professional-grade trading strategy development has traditionally required expensive software, complex coding skills, and significant time investment. Today we're showcasing two powerful workers that transform how you build, test, and optimize trading strategies: the Signal Generator and Backtest Strategy with AI-powered optimization.

Key Advantages

1. Zero-Code Strategy Building

Traditional platforms require learning scripting languages or programming. ApudFlow offers:

  • Visual drag-and-drop workflow builder
  • Point-and-click condition configuration
  • No programming knowledge needed

2. True AI Optimization (Not Just Grid Search)

What most platforms call "optimization" is really just exhaustive grid search. ApudFlow's AI:

  • Analyzes your data's volatility characteristics
  • Automatically determines appropriate parameter ranges
  • Uses recursive search to escape local optima
  • Tests trailing stops and time-based exits automatically

3. Integrated Execution Pipeline

Build signals → Backtest → Deploy to live trading - all in one platform:

  • Connect directly to multiple brokers and exchanges
  • Real-time notifications via messaging apps
  • No code needed between backtest and live

What is Signal Generator?

The Signal Generator is a flexible condition-based signal engine that transforms your indicator data into actionable trading signals. Think of it as a visual "if-then" builder for trading rules.

Core Capabilities

  • 15+ Operators: Numeric (>, <, crosses_above), string (contains, matches)
  • Nested Logic: Build complex (A AND B) OR (C AND D) conditions
  • Field Math: Use expressions like high - low or close * 2
  • Previous Bar: Reference close[-1] for previous values
  • Percentage Functions: pct_change(close), pct(high, open) for % calculations
  • Signal Filtering: Avoid duplicates with first mode or cooldown

Example: RSI Mean Reversion Strategy

{
  "long_conditions": [
    {"left": "rsi", "operator": "crosses_above", "right": "30"}
  ],
  "short_conditions": [
    {"left": "rsi", "operator": "crosses_below", "right": "70"}
  ],
  "close_mode": "reverse"
}

That's it! No coding required. The visual interface makes this even simpler with dropdowns and auto-complete.
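Under the hood, a crossing operator fires only on the bar where the value moves through the level, not on every bar above it. A minimal Python sketch of the idea (illustrative only, not ApudFlow's internals):

```python
def crosses_above(series, threshold):
    """Return indices where the series crosses from below to at-or-above
    the threshold (the signal fires only on the crossing bar)."""
    signals = []
    for i in range(1, len(series)):
        if series[i - 1] < threshold and series[i] >= threshold:
            signals.append(i)
    return signals

rsi = [45, 38, 29, 27, 31, 36, 28, 33]
print(crosses_above(rsi, 30))  # [4, 7] - bars where RSI crosses up through 30
```

Bars 4 and 7 fire because the RSI was below 30 on the previous bar; bar 5 does not, even though RSI is above 30 there.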


What is Backtest Strategy?

The Backtest Strategy worker is a high-performance backtesting engine that evaluates your signals against historical data with realistic execution modeling.

Performance Highlights

  • 100,000+ bars in milliseconds - vectorized numpy operations
  • O(1) signal lookup - instant bar matching
  • Memory optimized - handles years of tick data
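As an illustration of why vectorization matters: trade returns can be computed for all trades at once with numpy index arrays, with no Python loop over bars. A toy sketch of the idea, not the engine itself:

```python
import numpy as np

# Toy data: close prices plus entry/exit bar indices produced by signal
# matching (precomputed index arrays allow O(1) lookups per trade).
close = np.array([100.0, 101.0, 99.5, 102.0, 104.0, 103.0])
entries = np.array([0, 2])   # bar indices where long trades open
exits   = np.array([1, 4])   # bar indices where those trades close

# One vectorized expression yields the fractional return of every trade.
trade_returns = close[exits] / close[entries] - 1.0
print(trade_returns)
```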

Complete Risk Management

  • Stop Loss: Percent, ATR multiple, Fixed price
  • Take Profit: Percent, ATR, Risk:Reward ratio, Fixed
  • Trailing Stop: Percentage-based with auto-adjustment
  • Position Sizing: Percent of equity, Fixed amount, Risk-based
  • Time Exits: Max hold duration, Close at specific time
  • Trading Window: Market hours only

Professional Statistics Output

Every backtest produces institutional-grade metrics:

  • Risk-adjusted returns: Sharpe, Sortino, Calmar ratios
  • Drawdown analysis: Max DD, duration, recovery time
  • Trade breakdown: By direction, exit reason, time period
  • Visualization data: Equity curve, drawdown curve, trade markers
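Two of these metrics are simple enough to state directly. A sketch of the standard formulas (252 trading periods per year assumed for annualization; ApudFlow's exact implementation may differ):

```python
import math

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of per-bar fractional returns
    (risk-free rate assumed zero for simplicity)."""
    n = len(returns)
    mean = sum(returns) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in returns) / n)
    return (mean / std) * math.sqrt(periods_per_year) if std else 0.0

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction."""
    peak, worst = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

equity = [100, 105, 103, 110, 98, 107]
print(max_drawdown(equity))  # drop from the 110 peak to 98
```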

🤖 AI Optimization: The Game Changer

This is where ApudFlow truly shines. Traditional optimization requires you to:

  1. Guess reasonable parameter ranges
  2. Set up grid search manually
  3. Analyze hundreds of results
  4. Hope you didn't overfit

ApudFlow's AI does all of this automatically:

How AI Optimization Works

  1. Volatility Analysis

    • Measures average bar price change
    • Detects timeframe (tick/intraday/daily)
    • Identifies your data's characteristics
  2. Smart Range Generation

    • Stop loss: 0.5x to 3x volatility
    • Take profit: 1x to 5x volatility
    • Position size: 5% to 25% of capital
    • Trailing stops: Based on timeframe
  3. Recursive Search

    • If best result is unprofitable, AI expands search
    • Up to 3 additional passes with wider ranges
    • Automatically finds better solutions
  4. Complete Output

    • Best parameters ready to copy
    • Top 10 alternatives to compare
    • Full trade list for chart visualization
    • Recommendations for improvement
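The search loop described above can be sketched as randomized sampling with range expansion. This is a simplified illustration assuming a `backtest` callable that scores a parameter set; the actual AI derives its ranges from volatility analysis and uses a more sophisticated search:

```python
import random

def optimize(backtest, ranges, trials=50, max_expansions=3, seed=42):
    """Random search over parameter ranges; if the best result is still
    unprofitable, widen the ranges and search again (up to max_expansions)."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(max_expansions + 1):
        for _ in range(trials):
            params = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
            score = backtest(params)
            if score > best_score:
                best_params, best_score = params, score
        if best_score > 0:   # profitable: stop expanding
            break
        ranges = {k: (lo * 0.5, hi * 2.0) for k, (lo, hi) in ranges.items()}
    return best_params, best_score

def toy_backtest(p):
    # Toy objective whose best score sits near stop_loss = 2.0
    return 1.0 - (p["stop_loss"] - 2.0) ** 2

params, score = optimize(toy_backtest, {"stop_loss": (0.5, 3.0)})
print(round(params["stop_loss"], 2), round(score, 2))
```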

Using AI Optimization

Simply check the "🤖 AI Find Best Strategy" checkbox and select your optimization target. That's all - every other parameter is hidden because AI determines them automatically.

Best optimization targets:

  • sharpe_ratio (recommended): Best risk-adjusted returns
  • total_return: Maximum profit (higher risk)
  • profit_factor: Consistent profit ratio
  • sortino_ratio: Focus on downside risk only

Building a Complete Trading System

Here's how the pieces fit together in a real workflow:

Workflow Architecture

[Trigger] → [Data Source] → [Indicators] → [Signal Generator] → [Backtest Strategy]

[Telegram Notify] ← [Deploy to Live]

Step 1: Fetch Market Data

Connect your preferred data source:

  • Stock/Forex APIs: Stocks, forex, crypto, ETFs
  • Equity Data Providers: US equities with tick data
  • Crypto Exchanges: Cryptocurrency markets

Step 2: Add Technical Indicators

Use Python Code worker or built-in indicators from your data provider:

  • RSI, MACD, Bollinger Bands
  • Moving averages (SMA, EMA)
  • ATR for volatility

Step 3: Generate Signals

Configure Signal Generator with your entry/exit conditions:

Bullish Engulfing Pattern:

{
  "long_conditions": [
    {"left": "close - open", "operator": ">", "right": "0"},
    {"left": "close[-1] - open[-1]", "operator": "<", "right": "0"},
    {"left": "close - open", "operator": ">", "right": "open[-1] - close[-1]"}
  ],
  "long_logic": "AND"
}

3% Price Spike with Volume:

{
  "long_conditions": [
    {"left": "pct_change(close)", "operator": ">=", "right": "3"},
    {"left": "volume", "operator": ">", "right": "volume[-1]"}
  ],
  "long_logic": "AND"
}
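For clarity, `pct_change(close)` is assumed here to mean the percent change of the current close versus the previous bar's close:

```python
def pct_change(series, i):
    """Percent change of series[i] versus the previous bar, in percent.
    (Assumed semantics for illustration; see the worker docs for the
    authoritative definition.)"""
    return (series[i] - series[i - 1]) / series[i - 1] * 100.0

close = [200.0, 206.2]
print(pct_change(close, 1))  # ≈ 3.1, so the ">= 3" condition is satisfied
```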

Step 4: Backtest with AI

Enable AI optimization to find optimal:

  • Stop loss distance
  • Take profit target
  • Position sizing
  • Trailing stop configuration

Step 5: Analyze and Deploy

Review the AI's recommendations:

  • Check top 10 parameter combinations
  • Examine trades on chart
  • Validate with block analysis
  • Deploy winners to live trading

Real-World Strategy Examples

Momentum Breakout Strategy

Signal Generator:

{
  "long_conditions": [
    {"left": "close", "operator": ">", "right": "high[-1]"},
    {"left": "volume", "operator": ">", "right": "volume_sma * 1.5"}
  ],
  "long_logic": "AND",
  "close_mode": "none"
}

Backtest Configuration:

  • Enable AI optimization
  • Target: sharpe_ratio
  • Let AI determine SL/TP

Why close_mode: none? This tells Signal Generator to never generate close signals - the Backtest Strategy handles all exits via stop loss, take profit, and trailing stops. This is the professional approach for momentum strategies.

Mean Reversion with Bollinger Bands

Signal Generator:

{
  "long_conditions": [
    {"left": "close", "operator": "<=", "right": "bb_lower"}
  ],
  "close_long_conditions": [
    {"left": "close", "operator": ">=", "right": "bb_middle"}
  ],
  "close_mode": "conditions"
}

Backtest Configuration:

  • AI optimization with profit_factor target
  • SL/TP type: percent
  • Block analysis: 6 blocks for validation

Multi-Timeframe Trend Following

Signal Generator:

{
  "long_conditions": [
    {"left": "close", "operator": ">", "right": "sma_20"},
    {"left": "close", "operator": ">", "right": "sma_200"},
    {"left": "adx", "operator": ">", "right": "25"}
  ],
  "long_logic": "AND",
  "signal_mode": "first"
}

Why signal_mode: first? This generates a signal only when conditions BECOME true, preventing duplicate signals on every bar the condition remains true.
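In other words, `first` mode is rising-edge detection on the combined condition. A minimal sketch:

```python
def first_mode(condition_flags):
    """Keep a signal only on bars where the condition becomes true
    (rising edge), suppressing repeats while it stays true."""
    signals = []
    prev = False
    for i, flag in enumerate(condition_flags):
        if flag and not prev:
            signals.append(i)
        prev = flag
    return signals

flags = [False, True, True, True, False, True]
print(first_mode(flags))  # [1, 5] - bars 2 and 3 are suppressed
```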


AI Output: Understanding Your Results

When AI optimization completes, you get:

Best Parameters (Ready to Copy!)

{
  "stop_loss_value": 1.8,
  "take_profit_value": 4.5,
  "position_size": 0.15,
  "trailing_stop": true,
  "trailing_stop_value": 1.2,
  "rr_ratio": 2.5
}

Performance Metrics

{
  "total_return_pct": 47.3,
  "sharpe_ratio": 1.85,
  "max_drawdown_pct": 12.4,
  "win_rate": 58.2,
  "profit_factor": 2.1,
  "total_trades": 156
}

Recommendations

The AI provides actionable insights:

  • "Strategy shows strong risk-adjusted returns (Sharpe > 1.5)"
  • "Win rate is solid with good profit factor"
  • "Consider tighter trailing stop for momentum capture"

Trade Details for Charting

Each trade includes all data needed for visualization:

  • Entry/exit timestamps
  • Entry/exit prices
  • Stop loss and take profit levels
  • Position size
  • Profit/loss
  • Exit reason

Walk-Forward Validation

Don't trust a strategy that only works in hindsight! Use block analysis to validate robustness:

analysis_blocks: 6

This splits your data into 6 equal periods and tests the strategy on each one independently.

Consistency Score Interpretation

  • 80-100 ⭐: Excellent - reliable across all periods
  • 60-80 ✅: Good - minor variations, generally reliable
  • 40-60 ⚠️: Moderate - review needed, possible overfit
  • 20-40 ❌: Poor - likely overfitted to specific periods
  • 0-20 🚫: Very Poor - strategy fails in multiple periods

A strategy that scores 80+ across 6 blocks is far more likely to perform in live trading than one that shows great overall results but inconsistent block performance.
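The mechanics are easy to sketch: split the bars into contiguous blocks, backtest each block independently, and score how consistently the blocks perform. ApudFlow's exact scoring formula is internal; this toy version only shows the shape of the computation:

```python
def split_blocks(bars, n_blocks):
    """Split bar data into n_blocks near-equal contiguous periods."""
    size = len(bars) // n_blocks
    return [bars[i * size:(i + 1) * size] for i in range(n_blocks)]

def consistency(block_returns):
    """Illustrative score: share of profitable blocks, scaled to 0-100."""
    profitable = sum(1 for r in block_returns if r > 0)
    return 100.0 * profitable / len(block_returns)

bars = list(range(60))
blocks = split_blocks(bars, 6)
print([len(b) for b in blocks])                       # [10, 10, 10, 10, 10, 10]
print(consistency([4.2, 1.1, -0.5, 3.0, 2.2, 0.8]))   # ≈ 83.3 (5 of 6 profitable)
```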


Integration with Live Trading

ApudFlow's greatest strength is the seamless path from backtest to live:

Direct Broker Integration

  • Crypto Exchanges: Spot and futures trading
  • Traditional Brokers: Multi-asset trading
  • More integrations: Expanding broker support

Alert and Notification Pipeline

[Signal Generator] → [Condition Check] → [Messaging App]
→ [Chat Notification]
→ [Email Alert]
→ [Execute Trade]

Schedule and Automation

  • Run strategies on schedule (1min, 5min, hourly)
  • 24/7 monitoring without manual intervention
  • Automatic position management

Getting Started: 5-Minute Quick Start

  1. Create new workflow in ApudFlow
  2. Add data source (any supported market data provider)
  3. Add Signal Generator with simple RSI conditions:

{
  "long_conditions": [{"left": "rsi", "operator": "<", "right": "30"}],
  "short_conditions": [{"left": "rsi", "operator": ">", "right": "70"}]
}

  4. Add Backtest Strategy and enable AI optimization
  5. Run and analyze - AI finds optimal parameters automatically!

Summary: Why Signal Generator + Backtest Strategy?

  • No coding: 10x faster strategy development
  • AI optimization: Find parameters you'd never guess
  • License-safe: Deploy commercially without worries
  • Walk-forward validation: Trust your results
  • Direct execution: Backtest → Live in one platform
  • Professional stats: Institutional-grade analytics

Whether you're a discretionary trader looking to validate your ideas, a quant developer seeking rapid prototyping, or a fund manager requiring robust validation - ApudFlow's Signal Generator and Backtest Strategy provide the complete toolkit.


Ready to build your first strategy? Start with a simple RSI strategy, let AI optimize it, and experience the difference of professional-grade backtesting without the complexity.

Questions? Our community is here to help you develop winning strategies! 📈🚀

Fetch RSS Connector - Direct Access to News Feeds

· 5 min read
ApudFlow OS
Platform Updates

Access to real-time news is essential for staying informed about market movements and global events. Introducing the Fetch RSS Connector - a lightweight and powerful worker that provides direct access to RSS feeds from major news sources without requiring API keys or complex authentication.

What is Fetch RSS Connector?

The Fetch RSS Connector allows you to pull news articles directly from RSS feeds of trusted publications like Bloomberg, Investing.com, Yahoo Finance, and The New York Times. Unlike API-based news services, RSS feeds are freely accessible and provide real-time updates as soon as articles are published.

With built-in sorting by publication date and support for multiple feeds in a single request, you can quickly aggregate news from various sources for comprehensive market monitoring.

Key Features

  • No API Key Required: Direct access to public RSS feeds without authentication
  • Multiple Sources: Aggregate articles from multiple RSS feeds simultaneously
  • Real-time Updates: Get the latest articles as soon as they're published
  • Pre-configured Defaults: Ready-to-use feed URLs for major financial news sources
  • Date Sorting: Automatic chronological sorting with newest articles first
  • Flexible Limits: Control the number of articles returned (up to 500)
  • Structured Output: Consistent article format with title, body, date, source, and URL

How It Works

Core Functionality

  • RSS Parsing: Fetches and parses XML feeds from specified URLs
  • Multi-source Aggregation: Combines articles from multiple feeds into one stream
  • Date Extraction: Parses publication dates from various RSS formats (RFC 2822, ISO)
  • Chronological Sorting: Orders articles by date, newest first
  • Content Extraction: Pulls title, summary/description, source, and link
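The parsing step can be sketched with only the Python standard library (the worker's internals may differ; the sample feed below is made up for illustration):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime  # handles RFC 2822 dates

SAMPLE_FEED = """<rss version="2.0"><channel><title>Demo</title>
<item><title>Older story</title><link>https://example.com/a</link>
<pubDate>Mon, 10 Nov 2025 09:00:00 GMT</pubDate></item>
<item><title>Newer story</title><link>https://example.com/b</link>
<pubDate>Tue, 11 Nov 2025 09:00:00 GMT</pubDate></item>
</channel></rss>"""

def parse_feed(xml_text, source="demo"):
    """Extract title/link/date from each <item>, newest first."""
    root = ET.fromstring(xml_text)
    articles = []
    for item in root.iter("item"):
        articles.append({
            "title": item.findtext("title", ""),
            "url": item.findtext("link", ""),
            "date": parsedate_to_datetime(item.findtext("pubDate")),
            "source": source,
        })
    articles.sort(key=lambda a: a["date"], reverse=True)
    return articles

print([a["title"] for a in parse_feed(SAMPLE_FEED)])  # newest first
```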

Default News Sources

The worker comes pre-configured with high-quality financial news feeds:

  • Investing.com - Stock market and forex news
  • Bloomberg Markets - Global financial markets coverage
  • Yahoo Finance - Comprehensive financial news
  • New York Times Economy - Economic analysis and reporting

Getting Started - Interface Configuration

Basic Setup

  1. Select Worker Type: Choose "fetch_rss" from the worker selection dropdown
  2. Review Default URLs: The worker includes pre-configured feeds for major sources
  3. Customize If Needed: Add or remove RSS feed URLs as desired
  4. Set Limit: Configure maximum number of articles (default: 100)
  5. Execute: Run the worker to retrieve articles from all feeds

Parameter Configuration

rss_urls (array): List of RSS feed URLs to fetch

[
  "https://www.investing.com/rss/news_1063.rss",
  "https://feeds.bloomberg.com/markets/news.rss",
  "https://news.yahoo.com/rss/finance"
]

limit (number): Maximum articles to return (1-500, default: 100)

Practical Implementation Examples

Financial News Aggregation

Combine Fetch RSS with VectorAnalyzer for intelligent news filtering:

Workflow:

  1. Schedule Trigger - Run every 15 minutes
  2. Fetch RSS - Collect from default financial feeds
  3. VectorAnalyzer - Filter by query "market volatility earnings"
  4. Table Widget - Display relevant articles

Multi-Source News Dashboard

Build a comprehensive news monitoring system:

Configuration Example:

{
  "rss_urls": [
    "https://www.investing.com/rss/news_1063.rss",
    "https://feeds.bloomberg.com/markets/news.rss",
    "https://news.yahoo.com/rss/finance",
    "https://rss.nytimes.com/services/xml/rss/nyt/Economy.xml",
    "https://feeds.reuters.com/reuters/businessNews"
  ],
  "limit": 200
}

Sentiment Analysis Pipeline

Combine with VectorAnalyzer for sentiment-aware news monitoring:

Workflow Chain:

  1. Fetch RSS → Collect articles from multiple sources
  2. VectorAnalyzer → Apply semantic search with sentiment analysis
  3. Filter by Polarity → Focus on strongly positive or negative news
  4. Telegram Notify → Alert on significant sentiment shifts

Integration with VectorAnalyzer

The Fetch RSS worker pairs perfectly with VectorAnalyzer for AI-powered news analysis:

Example Configuration:

// Step 1: Fetch RSS
{
  "rss_urls": ["https://feeds.bloomberg.com/markets/news.rss"],
  "limit": 100
}

// Step 2: VectorAnalyzer
{
  "data": "{{workers[0].[result]}}",
  "query": "Federal Reserve interest rates monetary policy",
  "fields": ["title", "body"],
  "top_percentage": 30,
  "sort_by": "relevance"
}

Result: Articles semantically related to Federal Reserve policy, ranked by relevance with sentiment scores.

  • Investing.com Stocks (stock market): https://www.investing.com/rss/news_1063.rss
  • Investing.com Forex (currency markets): https://www.investing.com/rss/news_14.rss
  • Bloomberg Markets (global markets): https://feeds.bloomberg.com/markets/news.rss
  • Yahoo Finance (general finance): https://news.yahoo.com/rss/finance
  • NYT Economy (economic analysis): https://rss.nytimes.com/services/xml/rss/nyt/Economy.xml
  • Reuters Business (business news): https://feeds.reuters.com/reuters/businessNews

Best Practices

Feed Selection

  • Choose feeds that match your analysis focus
  • Mix broad coverage (Yahoo, Reuters) with specialized sources (Investing.com sectors)
  • Test feed availability periodically as URLs can change

Performance Optimization

  • Use appropriate limits based on your processing needs
  • Schedule updates based on your real-time requirements
  • Consider combining with caching for frequently accessed data

Workflow Design

  • Chain with VectorAnalyzer for intelligent filtering
  • Use sentiment_polarity for market mood analysis
  • Implement alerts for breaking news patterns

Conclusion

The Fetch RSS Connector provides a simple yet powerful way to access real-time news from trusted sources without the complexity of API authentication. Combined with VectorAnalyzer's semantic search and sentiment analysis capabilities, you can build sophisticated news monitoring workflows that surface the most relevant market intelligence.

Whether you're tracking market sentiment, monitoring specific sectors, or building comprehensive news dashboards, the Fetch RSS worker offers the flexibility and reliability you need. Start with the default feeds, then customize your sources to match your specific monitoring requirements.

For detailed guides on combining Fetch RSS with semantic analysis, check out our dedicated articles covering AI-powered news workflows and real-time sentiment monitoring strategies.

Fetch NewsAPI Connector - Comprehensive News Data Access

· 7 min read
ApudFlow OS
Platform Updates

In today's information-driven world, access to reliable and comprehensive news data is crucial for informed decision-making across various domains. Introducing the Fetch NewsAPI Connector - a powerful worker that provides seamless access to one of the most comprehensive news databases available through EventRegistry's NewsAPI.

What is Fetch NewsAPI Connector?

The Fetch NewsAPI Connector integrates with EventRegistry's NewsAPI to provide access to millions of news articles from thousands of trusted sources worldwide. Whether you're building news aggregators, conducting sentiment analysis, performing market research, or developing AI-powered content systems, this connector offers everything you need in one unified interface.

With advanced filtering capabilities by categories, sources, languages, and date ranges, the Fetch NewsAPI connector eliminates the need for multiple news APIs and complex data aggregation pipelines.

Key Features

  • Comprehensive Coverage: Access to millions of articles from 100,000+ news sources worldwide
  • Advanced Filtering: Filter by categories, sources, languages, and date ranges
  • Real-time & Historical Data: Access both breaking news and extensive historical archives
  • Trusted Sources: Pre-configured with major financial and news publications
  • Flexible Categories: Support for DMOZ taxonomy with business, finance, and industry categories
  • Language Support: Multi-language news retrieval with English as default
  • Date Range Filtering: Precise temporal filtering for time-sensitive analysis
  • Rate Limiting: Built-in API key management and usage tracking
  • Structured Output: Consistent article data with titles, content, dates, and metadata

How It Works

Core Functionality

  • Article Retrieval: Fetches news articles using complex queries with category and source filtering
  • Temporal Filtering: Supports date range queries for historical and current news analysis
  • Source Validation: Ensures articles come from reputable news sources
  • Language Filtering: Retrieves articles in specified languages
  • Result Limiting: Configurable result limits with intelligent ranking

Data Processing Pipeline

  1. Query Construction: Builds complex EventRegistry queries with category and source filters
  2. API Communication: Secure connection to EventRegistry NewsAPI
  3. Article Filtering: Applies date ranges, language filters, and result limits
  4. Data Normalization: Standardizes article format across all sources
  5. Result Packaging: Returns structured data ready for downstream processing
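Steps 3-4 (filtering and normalization) can be sketched in plain Python. The field names below are illustrative, not EventRegistry's exact schema:

```python
from datetime import date

def normalize(raw_articles, date_from=None, limit=100):
    """Apply a date floor and a result limit, and standardize each
    article to a common shape (illustrative field names)."""
    out = []
    for art in raw_articles:
        d = date.fromisoformat(art["date"])
        if date_from and d < date.fromisoformat(date_from):
            continue  # drop articles older than the dateFrom floor
        out.append({"title": art["title"], "body": art.get("body", ""),
                    "date": art["date"], "source": art.get("source", "")})
    out.sort(key=lambda a: a["date"], reverse=True)  # newest first
    return out[:limit]

raw = [{"title": "A", "date": "2025-11-01"},
       {"title": "B", "date": "2025-10-01"},
       {"title": "C", "date": "2025-11-10"}]
print([a["title"] for a in normalize(raw, date_from="2025-10-15", limit=2)])
```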

Getting Started - Interface Configuration

Basic Setup

  1. Select Worker Type: Choose "fetch_newsapi" from the worker selection dropdown
  2. Configure API Key: Enter your EventRegistry API key or use environment variable
  3. Set Parameters: Configure categories, sources, date ranges, and limits
  4. Execute: Run the worker to retrieve filtered news articles

Common Parameters

  • apiKey: Your EventRegistry API key (optional if set as environment variable)
  • categories: Array of category URIs for topic filtering
  • sources: Array of news source URIs for publication filtering
  • lang: Language code for article filtering (default: "eng")
  • dateFrom/dateTo: ISO date range for temporal filtering
  • limit: Maximum number of articles to return (1-500)

Parameter Configuration Examples

Default Financial News Setup:

  • categories: ["dmoz/Business/Investing/Stocks_and_Bonds", "dmoz/Society/Government/Finance"]
  • sources: ["finance.yahoo.com", "bloomberg.com", "reuters.com", "cnbc.com"]
  • limit: 100

Breaking News Monitoring:

  • dateFrom: Current date (YYYY-MM-DD)
  • categories: Broad business categories
  • limit: 50

Advanced Configuration Options

Category Taxonomy

EventRegistry uses DMOZ taxonomy for categorization. Common financial categories include:

  • dmoz/Business/Investing/Stocks_and_Bonds - Stock market and investment news
  • dmoz/Society/Government/Finance - Government financial policy and regulation
  • dmoz/Business/Investing/Currencies - Forex and currency markets
  • dmoz/Business/Investing/Commodities - Commodity trading and markets

Source Selection Strategy

Major Financial Publications:

  • finance.yahoo.com - Yahoo Finance
  • bloomberg.com - Bloomberg News
  • reuters.com - Reuters
  • cnbc.com - CNBC
  • marketwatch.com - MarketWatch
  • wsj.com - Wall Street Journal

Specialized Sources:

  • investing.com - Investing.com
  • foxbusiness.com - Fox Business
  • nytimes.com - New York Times (business section)

Date Range Optimization

  • Real-time News: Use current date for dateFrom, omit dateTo
  • Historical Analysis: Set specific date ranges for backtesting
  • Trend Analysis: Use rolling date windows for ongoing monitoring
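A rolling date window is easy to compute with the standard library; the helper name here is illustrative:

```python
from datetime import date, timedelta

def rolling_window(days, today=None):
    """dateFrom/dateTo pair covering the last `days` days, ISO formatted
    as the NewsAPI worker expects (YYYY-MM-DD)."""
    today = today or date.today()
    return {"dateFrom": (today - timedelta(days=days)).isoformat(),
            "dateTo": today.isoformat()}

print(rolling_window(7, today=date(2025, 11, 15)))
# {'dateFrom': '2025-11-08', 'dateTo': '2025-11-15'}
```

Calling `rolling_window(7)` on a schedule keeps the query pinned to the trailing week without manual date edits.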

Practical Implementation Examples

Financial News Aggregation System

Create a comprehensive financial news monitoring workflow:

  1. Fetch Breaking News using default financial categories and sources
  2. Apply Vector Analysis using VectorAnalyzer for semantic search
  3. Perform Sentiment Analysis to gauge market sentiment
  4. Generate Alerts based on sentiment scores and relevance

Complete Workflow Example:

  • workers[0]: Fetch NewsAPI Connector
    • type: fetch_newsapi
    • categories: ["dmoz/Business/Investing/Stocks_and_Bonds"]
    • sources: ["bloomberg.com", "reuters.com", "cnbc.com"]
    • dateFrom: 2025-11-01
    • limit: 50
  • workers[1]: VectorAnalyzer
    • type: vector_analyzer
    • data: {{workers[0].result.results}}
    • query: "market volatility and economic indicators"
    • top_percentage: 30

Market Intelligence Dashboard

Build real-time market intelligence:

  • workers[0]: Multiple News Sources
    • type: fetch_newsapi
    • sources: ["finance.yahoo.com", "marketwatch.com", "foxbusiness.com"]
    • dateFrom: Current date
  • workers[1]: Category Analysis
    • type: fetch_newsapi
    • categories: ["dmoz/Business/Investing/Currencies", "dmoz/Business/Investing/Commodities"]
  • workers[2]: Sentiment Aggregation
    • type: sentiment_analyzer
    • data_source: {{workers[0].result.results}} + {{workers[1].result.results}}

Historical News Analysis

Analyze news patterns over time:

  • workers[0]: Historical Data Collection
    • type: fetch_newsapi
    • dateFrom: 2025-01-01
    • dateTo: 2025-11-15
    • categories: Broad business categories
    • limit: 500
  • workers[1]: Trend Analysis
    • type: time_series_analyzer
    • data: {{workers[0].result.results}}
    • group_by: month

Operations Comparison Table

  • Breaking News: real-time monitoring; key parameters: dateFrom=current, limit=50; output: recent articles array
  • Historical Analysis: trend research; key parameters: dateFrom/dateTo ranges; output: time-filtered articles
  • Category Focus: topic-specific news; key parameter: categories array; output: themed article collection
  • Source Filtering: publication quality; key parameter: sources array; output: trusted source articles
  • Multi-language: global coverage; key parameter: lang; output: localized content
  • Volume Control: performance optimization; key parameter: limit; output: sized result sets

Best Practices and Tips

API Key Management

  • Use personal API keys for higher rate limits and dedicated quotas
  • Set EVENT_REGISTRY_API_KEY environment variable for security
  • Monitor usage through EventRegistry dashboard to avoid limits

Query Optimization

  • Use specific category URIs for targeted results rather than broad searches
  • Combine category and source filtering for precise content control
  • Limit results appropriately for your processing capacity

Date Range Strategies

  • Use ISO format (YYYY-MM-DD) for all date parameters
  • Omit dateTo for ongoing "breaking news" monitoring
  • Use dateFrom only for "from this date forward" queries

Performance Considerations

  • Lower limits (50-100) for real-time applications
  • Higher limits (200-500) for batch processing and analysis
  • Consider pagination for very large result sets

Integration with Other Workers

AI-Powered News Analysis

Combine with VectorAnalyzer for semantic search:

  • VectorAnalyzer: Find articles most relevant to your query
  • Sentiment Analysis: Understand market sentiment from news content
  • Topic Modeling: Discover trending themes across news corpus

Financial Data Correlation

Link news with market data:

  • Price Data Integration: Correlate news events with price movements
  • Economic Calendar: Connect news with economic indicators
  • Technical Analysis: Use news sentiment as additional signals

Content Processing Pipelines

Build automated content workflows:

  • Text Summarization: Generate article summaries for quick review
  • Entity Extraction: Identify companies, people, and organizations
  • Trend Detection: Monitor topic frequency and sentiment changes

Conclusion

The Fetch NewsAPI Connector represents a comprehensive solution for news data access, providing everything from breaking financial news to historical archives in a single, easy-to-use interface. Whether you're a financial analyst, data scientist, content creator, or developer building news applications, this connector offers the flexibility and reliability you need to build sophisticated news-driven workflows.

With support for global news sources, advanced filtering capabilities, and seamless integration with AI analysis tools like VectorAnalyzer, the Fetch NewsAPI connector eliminates news data silos and simplifies the development of intelligent news processing systems. Start exploring the power of comprehensive news data today and unlock new possibilities for your news analysis and content processing applications.

For detailed guides on specific use cases, check out our dedicated articles covering advanced filtering techniques, AI-powered news analysis workflows, and real-time news monitoring strategies with step-by-step interface instructions and practical examples.

VectorAnalyzer Connector - Advanced AI-Powered Semantic Search Engine

· 6 min read
ApudFlow OS
Platform Updates

In today's data-rich environment, finding relevant information among vast collections of text requires more than simple keyword matching. Introducing the VectorAnalyzer Connector - an advanced AI-powered semantic search engine that understands meaning, context, and sentiment to deliver intelligent content analysis and ranking.

What is VectorAnalyzer Connector?

The VectorAnalyzer Connector transforms traditional text search by using cutting-edge AI technology to understand semantic meaning rather than just matching keywords. Whether you're analyzing news articles, research documents, customer feedback, or any text collection, this connector provides intelligent similarity scoring, sentiment analysis, and dynamic filtering to surface the most relevant content.

Built on state-of-the-art language models and vector databases, the VectorAnalyzer delivers enterprise-grade semantic search capabilities with real-time processing and flexible ranking options.

Key Features

  • Semantic Understanding: AI-powered vector embeddings capture meaning, not just keywords
  • Intelligent Ranking: Cosine similarity scoring with dynamic threshold filtering
  • Sentiment Analysis: Built-in positive/negative sentiment detection for each result
  • Flexible Sorting: Sort by relevance, similarity score, or publication date
  • Batch Processing: Optimized for high-performance processing of large document collections
  • Dynamic Thresholds: Automatic result filtering based on data distribution and quality
  • Multi-Format Support: Handles various text field names and document structures
  • Performance Optimized: CPU-optimized with caching and batch processing
  • Configurable Quality: Adjustable top_percentage for precision vs recall control

How It Works

Core AI Pipeline

  • Vector Embedding Generation: Converts text into mathematical vectors using advanced transformer models
  • Similarity Calculation: Uses cosine similarity to measure semantic relatedness (0.0-1.0 scale)
  • Dynamic Filtering: Automatically determines quality thresholds based on result distribution
  • Sentiment Classification: Analyzes emotional tone using fine-tuned language models
  • Intelligent Ranking: Combines similarity scores with optional date-based sorting

Processing Architecture

  1. Text Extraction: Dynamically identifies and concatenates text fields from documents
  2. Batch Encoding: Processes multiple texts simultaneously for optimal performance
  3. Vector Search: Efficient similarity search using FAISS vector database
  4. Quality Filtering: Applies dynamic thresholds to ensure result relevance
  5. Sentiment Analysis: Classifies emotional tone of top results
  6. Result Ranking: Sorts and formats results based on user preferences

Getting Started - Interface Configuration

Basic Setup

  1. Select Worker Type: Choose "vector_analyzer" from the worker selection dropdown
  2. Prepare Data: Provide article collection as JSON string or array
  3. Set Search Query: Enter natural language search terms
  4. Configure Parameters: Adjust quality settings and sorting preferences
  5. Execute: Run semantic search with AI-powered analysis

Common Parameters

  • data: JSON string or array of documents with text content
  • query: Natural language search query for semantic matching
  • top_percentage: Percentage of top results to return (1-100%)
  • sort_by: Sort method (relevance/similarity/date)
  • skip_sentiment: Skip sentiment analysis for faster processing

Parameter Configuration Examples

High-Precision Search:

  • query: "artificial intelligence in healthcare"
  • top_percentage: 25
  • sort_by: "relevance"

Recent Content Analysis:

  • query: "market volatility trends"
  • top_percentage: 50
  • sort_by: "date"

Fast Processing Mode:

  • query: "breaking news"
  • skip_sentiment: true
  • top_percentage: 30

Advanced Configuration Options

Vector Embedding Models

SentenceTransformers: Uses all-MiniLM-L6-v2 for optimal balance of speed and accuracy

  • 384-dimensional embeddings
  • Optimized for semantic similarity
  • CPU-efficient processing

Similarity Algorithms

Cosine Similarity: Measures angle between vectors for semantic relatedness

  • Scale: 0.0 (completely dissimilar) to 1.0 (identical meaning)
  • L2 normalization for consistent scoring
  • FAISS-accelerated computation
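The scoring scheme above can be sketched in a few lines of NumPy. This is an illustrative stand-in for the FAISS-accelerated version, not the worker's actual code:

```python
import numpy as np

def cosine_scores(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity via L2 normalization plus a dot product.

    With unit-length vectors the inner product equals the cosine of the
    angle between them, which is why normalized embeddings pair well
    with FAISS inner-product indexes.
    """
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return d @ q

docs = np.array([[2.0, 0.0], [0.0, 3.0]])  # same direction / orthogonal
query = np.array([1.0, 0.0])
scores = cosine_scores(query, docs)        # 1.0 for the first doc, 0.0 for the second
```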

Dynamic Threshold Calculation

  • Analyzes similarity score distribution
  • Automatically sets cutoff based on top_percentage
  • Ensures consistent result quality across different datasets
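As a rough sketch of the idea (the worker's distribution analysis may be more sophisticated), the cutoff can be derived directly from the ranked scores:

```python
def dynamic_threshold(scores, top_percentage):
    """Return the score that keeps roughly the top `top_percentage`
    percent of results, so the cutoff adapts to each dataset."""
    k = max(1, round(len(scores) * top_percentage / 100))
    return sorted(scores, reverse=True)[k - 1]

scores = [0.91, 0.85, 0.60, 0.42, 0.15]
cutoff = dynamic_threshold(scores, top_percentage=40)  # keeps the top 2 of 5
kept = [s for s in scores if s >= cutoff]              # [0.91, 0.85]
```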

Sentiment Analysis Pipeline

DistilBERT Model: Fine-tuned for sentiment classification

  • Binary classification (positive/negative)
  • Confidence scoring (0.0-1.0)
  • Batch processing for efficiency

Practical Implementation Examples

News Analysis and Monitoring

Create intelligent news monitoring with semantic understanding:

  1. Fetch News Articles using Fetch NewsAPI connector
  2. Apply Vector Search for topic-specific content discovery
  3. Analyze Sentiment to gauge market mood and public perception
  4. Generate Alerts based on relevance scores and sentiment trends

Complete Workflow Example:

  • workers[0]: Fetch NewsAPI Connector
    • type: fetch_newsapi
    • categories: ["dmoz/Business/Investing/Stocks_and_Bonds"]
    • limit: 200
  • workers[1]: VectorAnalyzer
    • type: vector_analyzer
    • data: {{workers[0].result.results}}
    • query: "cryptocurrency market trends and adoption"
    • top_percentage: 35
    • sort_by: date

Content Recommendation System

Build personalized content discovery:

  • workers[0]: Content Database Query
    • type: database_query
    • collection: articles
    • limit: 1000
  • workers[1]: Semantic Search
    • type: vector_analyzer
    • data: {{workers[0].result.documents}}
    • query: "machine learning applications"
    • top_percentage: 20
  • workers[2]: Recommendation Engine
    • type: content_recommender
    • articles: {{workers[1].result.results}}

Sentiment-Based Market Intelligence

Analyze market sentiment from news and social media:

  • workers[0]: Multi-Source Data Collection
    • type: fetch_newsapi
    • sources: ["bloomberg.com", "reuters.com", "cnbc.com"]
  • workers[1]: VectorAnalyzer with Sentiment
    • type: vector_analyzer
    • data: {{workers[0].result.results}}
    • query: "economic indicators and growth"
    • top_percentage: 40
    • sort_by: relevance
  • workers[2]: Sentiment Dashboard
    • type: sentiment_aggregator
    • data: {{workers[1].result.results}}

Operations Comparison Table

| Feature Category | Use Case | Key Parameters | Output Characteristics |
| --- | --- | --- | --- |
| Semantic Search | Content discovery | query, top_percentage | Similarity scores 0.0-1.0 |
| Sentiment Analysis | Emotional tone detection | skip_sentiment=false | positive/negative classification |
| Quality Filtering | Precision control | top_percentage | Dynamic threshold application |
| Temporal Sorting | Time-based ranking | sort_by="date" | Chronological ordering |
| Performance Mode | Speed optimization | skip_sentiment=true | 2x faster processing |

Best Practices and Tips

Query Optimization

  • Use natural language queries: "renewable energy investments" works better than "green stocks"
  • Include context terms: "artificial intelligence in healthcare applications"
  • Avoid single words when possible: "market volatility trends" vs "volatility"

Quality Control

  • Start with top_percentage=40 for balanced results
  • Use lower percentages (10-25) for high-precision tasks
  • Use higher percentages (60-100) for comprehensive analysis

Performance Tuning

  • Enable skip_sentiment for speed-critical applications
  • Limit input data size for real-time processing
  • Use batch processing for large document collections

Result Interpretation

  • Similarity scores > 0.7 indicate strong semantic matches
  • Scores 0.3-0.7 represent moderate relevance
  • Scores < 0.3 may indicate weak or tangential relationships
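These bands translate directly into a small helper for triaging results; the thresholds are the rules of thumb above, not hard guarantees:

```python
def interpret_similarity(score: float) -> str:
    """Map a similarity score onto the rough bands described above."""
    if score > 0.7:
        return "strong"
    if score >= 0.3:
        return "moderate"
    return "weak"

labels = [interpret_similarity(s) for s in (0.82, 0.55, 0.10)]
# ["strong", "moderate", "weak"]
```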

Integration with Other Workers

News Analysis Pipeline

Combine with news fetching for intelligent content processing:

  • Fetch NewsAPI: Source articles from trusted publications
  • VectorAnalyzer: Find semantically relevant content
  • Sentiment Analysis: Understand emotional context
  • Trend Detection: Identify emerging topics and patterns

Content Management Systems

Enhance search capabilities in content platforms:

  • Database Query: Retrieve content from CMS
  • VectorAnalyzer: Power semantic search features
  • Recommendation Engine: Suggest related content
  • Analytics Dashboard: Track search effectiveness

Research and Intelligence

Build research automation workflows:

  • Document Processing: Extract text from research papers
  • VectorAnalyzer: Find related studies and references
  • Citation Analysis: Identify key sources and authors
  • Knowledge Graph: Build interconnected research networks

Conclusion

The VectorAnalyzer Connector represents a quantum leap in text search and analysis capabilities, moving beyond traditional keyword matching to true semantic understanding. Whether you're building news monitoring systems, content recommendation engines, research platforms, or intelligence analysis tools, this connector provides the AI-powered foundation you need for next-generation text analysis.

With support for advanced vector embeddings, intelligent similarity scoring, sentiment analysis, and flexible ranking options, the VectorAnalyzer eliminates the limitations of traditional search while delivering enterprise-grade performance and accuracy. Start exploring the power of semantic search today and unlock new possibilities for understanding and analyzing text data at scale.

For detailed guides on specific use cases, check out our dedicated articles covering advanced semantic search techniques, sentiment analysis workflows, and AI-powered content processing pipelines with step-by-step interface instructions and practical examples.

Twelve Data Market Data Connector - Comprehensive Financial Data Access

· 7 min read
ApudFlow OS
Platform Updates

In today's fast-paced financial markets, access to reliable and comprehensive market data is essential for informed trading and investment decisions. Introducing the Twelve Data Market Data Connector - a powerful worker that provides seamless access to one of the most comprehensive financial data APIs available.

What is Twelve Data Market Data Connector?

The Twelve Data Market Data Connector integrates with the Twelve Data API to provide access to real-time and historical financial data across multiple asset classes including stocks, forex, cryptocurrencies, ETFs, and mutual funds. Whether you're building trading strategies, conducting fundamental analysis, or developing financial applications, this connector offers everything you need in one unified interface.

With support for over 40 different operations ranging from historical price data to technical indicators and fundamental company information, the Twelve Data connector eliminates the need for multiple data sources and complex API integrations.

Key Features

  • 40+ Operations: From basic price data to advanced technical indicators and fundamental analysis
  • Multi-Asset Support: Stocks, forex, crypto, ETFs, mutual funds, and more
  • Real-time & Historical Data: Access both live market data and extensive historical datasets
  • Global Coverage: Data from exchanges worldwide with automatic timezone handling
  • Flexible Parameters: Customizable intervals, date ranges, and output sizes
  • Rate Limiting: Built-in per-user rate limiting when using shared API keys
  • Error Handling: Robust error handling with clear error messages
  • API Key Management: Support for personal API keys or shared environment keys

Core Operations Overview

1. Market Data Operations - The Foundation

time_series: Historical and intraday OHLCV data for any symbol
quote: Real-time quote snapshots with full market information
price: Simple real-time price data for quick checks

2. Technical Analysis Operations

indicator: 20+ technical indicators including SMA, EMA, RSI, MACD, Bollinger Bands

3. Discovery Operations

symbol_search: Find symbols by name, exchange, or country
exchanges: List all available exchanges and markets

4. Fundamental Operations

profile: Company overview and basic information
fundamentals: Detailed financial statements and metrics
dividends, splits, earnings: Historical corporate actions
logo: Company logo URLs for branding

5. Financial Statements

income_statement: Revenue, expenses, and profitability data
balance_sheet: Assets, liabilities, and equity information
cash_flow: Operating, investing, and financing cash flows

6. ETFs & Mutual Funds

etfs_list, etfs_world: ETF discovery and global listings
mutual_funds_list, mutual_funds_world: Mutual fund data and analysis

7. Calendar & Events

dividends_calendar, earnings_calendar: Upcoming corporate events
ipo_calendar: New public offerings and listings

8. Analyst & Market Data

earnings_estimate, revenue_estimate: Analyst forecasts
recommendations, price_target: Analyst opinions and targets
analyst_ratings: Comprehensive analyst coverage

Getting Started - Interface Configuration

Basic Setup

  1. Select Worker Type: Choose "twelve_data" from the worker selection dropdown
  2. Choose Operation: Select your desired operation from the operation dropdown
  3. API Key (Optional): Enter your personal Twelve Data API key, or leave blank to use the shared key
  4. Configure Parameters: Fill in operation-specific parameters based on your needs

Common Parameters

  • api_key: Your Twelve Data API key (optional if shared key is configured)
  • operation: The specific operation you want to perform
  • symbol: Stock ticker, forex pair, or crypto symbol (required for most operations)

Operation-Specific Parameters

Time Series Configuration:

  • interval: Data interval (1min, 5min, 15min, 30min, 45min, 1h, 2h, 4h, 1day, 1week, 1month)
  • outputsize: Number of data points to return (max 5000)
  • start_date/end_date: Date range for historical data
  • timezone: Target timezone for timestamps
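For reference, here is how those parameters map onto a raw request against the public Twelve Data REST API. The worker issues the equivalent call for you, and the `demo` key is a placeholder:

```python
import urllib.parse

BASE_URL = "https://api.twelvedata.com/time_series"

def time_series_url(symbol, interval="1day", outputsize=100,
                    api_key="demo", **extra):
    """Assemble a time_series request URL. `extra` can carry optional
    parameters such as start_date, end_date, or timezone."""
    params = {"symbol": symbol, "interval": interval,
              "outputsize": outputsize, "apikey": api_key}
    params.update({k: v for k, v in extra.items() if v is not None})
    return BASE_URL + "?" + urllib.parse.urlencode(params)

url = time_series_url("AAPL", interval="5min", outputsize=50,
                      timezone="America/New_York")
```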

Technical Indicators:

  • indicator: Indicator type (sma, ema, rsi, macd, bbands, etc.)
  • time_period: Lookback period for calculation
  • series_type: Price series to use (open, high, low, close)

Search & Discovery:

  • symbol: Search query for symbol lookup
  • exchange: Filter by specific exchange
  • country: Filter by country

Advanced Configuration Options

Rate Limiting Awareness

When using the shared API key, the system enforces per-user rate limits (5 calls per minute). Make sure your userId is properly configured in the context for accurate limiting.
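A sliding-window limiter like the one below captures the policy (5 calls per 60-second window, tracked per userId). It is a sketch, not the platform's implementation:

```python
import time
from collections import defaultdict, deque

class PerUserRateLimiter:
    """Sliding-window rate limiter keyed by user ID."""

    def __init__(self, max_calls=5, window_sec=60.0):
        self.max_calls = max_calls
        self.window_sec = window_sec
        self.calls = defaultdict(deque)  # user_id -> timestamps of recent calls

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls[user_id]
        while q and now - q[0] >= self.window_sec:
            q.popleft()                  # drop calls outside the window
        if len(q) < self.max_calls:
            q.append(now)
            return True
        return False

limiter = PerUserRateLimiter()
results = [limiter.allow("user-1", now=float(i)) for i in range(6)]
# first five calls pass; the sixth, still inside the window, is rejected
```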

Data Formatting

All timestamp data is automatically formatted in ISO format for consistency. You can customize this behavior through additional parameters if needed.

Multi-Symbol Support

Operations like time_series support comma-separated symbol lists for batch requests, allowing you to fetch data for multiple assets in a single call.

Practical Implementation Examples

Intraday Trading Strategy

Create a complete intraday analysis workflow:

  1. Fetch Real-time Quotes using the quote operation
  2. Calculate Technical Indicators using the indicator operation
  3. Monitor Multiple Symbols with batch time_series requests
  4. Set Alerts based on price levels and indicator signals

Complete Workflow Example:

  • workers[0]: Twelve Data Connector
    • type: twelve_data
    • operation: time_series
    • symbol: AAPL,MSFT,GOOGL
    • interval: 5min
    • outputsize: 100
  • workers[1]: Technical Analysis
    • type: technical_analysis
    • indicators: rsi,macd
    • data_source: workers[0].result

Fundamental Analysis System

Build comprehensive company analysis:

  • workers[0]: Company Profile
    • type: twelve_data
    • operation: profile
    • symbol: TSLA
  • workers[1]: Financial Statements
    • type: twelve_data
    • operation: income_statement
    • symbol: TSLA
    • period: annual
  • workers[2]: Analyst Estimates
    • type: twelve_data
    • operation: earnings_estimate
    • symbol: TSLA

ETF Portfolio Analysis

Analyze ETF performance and holdings:

  • workers[0]: ETF Discovery
    • type: twelve_data
    • operation: etfs_world
    • country: US
  • workers[1]: ETF Performance
    • type: twelve_data
    • operation: etfs_world_performance
    • currency: USD

Operations Comparison Table

| Operation Category | Best For | Key Parameters | Output Format |
| --- | --- | --- | --- |
| Market Data | Price analysis, charting | symbol, interval, dates | OHLCV arrays |
| Technical Indicators | Signal generation | indicator, symbol, period | Indicator values |
| Fundamentals | Company analysis | symbol | JSON objects |
| Financials | Ratio analysis | symbol, period | Financial statements |
| ETFs/Mutual Funds | Portfolio construction | filters, limits | Asset listings |
| Calendars | Event timing | date ranges | Event arrays |
| Analyst Data | Investment decisions | symbol | Estimates and ratings |

Best Practices and Tips

API Key Management

  • Use personal API keys for higher rate limits and dedicated quotas
  • The shared key enforces per-user limits (5/min) to ensure fair usage
  • Monitor your API usage through the Twelve Data dashboard

Data Optimization

  • Use appropriate intervals for your analysis timeframe
  • Limit outputsize to reduce response times and API costs
  • Cache frequently accessed data to minimize API calls

Error Handling

  • Check for API key validity before complex workflows
  • Handle rate limit errors gracefully with retry logic
  • Validate symbol existence before requesting detailed data

Performance Considerations

  • Batch requests for multiple symbols when possible
  • Use historical data endpoints for backtesting
  • Implement caching layers for real-time applications

Integration with Other Workers

Trading Strategy Integration

Combine with technical analysis workers:

  • Support & Resistance Calculator: Use price data for level identification
  • Pattern Recognition: Feed historical data for pattern detection
  • Risk Management: Incorporate volatility data from indicators

Portfolio Management

Build comprehensive portfolios:

  • Asset Allocation: Use ETF and mutual fund data
  • Performance Tracking: Monitor returns with price data
  • Rebalancing: Trigger based on fundamental changes

Alert Systems

Create intelligent notifications:

  • Price Alerts: Monitor specific levels with real-time data
  • Technical Signals: Generate alerts from indicator crossovers
  • Event Notifications: Track earnings and dividend calendars

Conclusion

The Twelve Data Market Data Connector represents a comprehensive solution for financial data access, providing everything from basic price information to advanced fundamental analysis in a single, easy-to-use interface. Whether you're a retail trader, institutional investor, or financial application developer, this connector offers the flexibility and reliability you need to build sophisticated financial workflows.

With support for global markets, multiple asset classes, and extensive historical data, the Twelve Data connector eliminates data silos and simplifies the development of financial applications. Start exploring the power of comprehensive market data today and unlock new possibilities for your trading and investment strategies.

For detailed guides on specific operations, check out our dedicated articles covering each major operation category with step-by-step interface instructions and practical examples.

Support & Resistance Calculator - Advanced Technical Analysis for Trading

· 14 min read
ApudFlow OS
Platform Updates

In the dynamic world of financial markets, identifying key support and resistance levels is crucial for successful trading strategies. Introducing the Support & Resistance Calculator - a comprehensive technical analysis worker that provides multiple methods for calculating critical price levels that influence market behavior.

What is Support & Resistance Calculator?

The Support & Resistance Calculator leverages advanced technical analysis algorithms to identify key price levels where buying and selling pressure typically converge. Unlike simple moving averages or basic indicators, this worker combines multiple proven methodologies including pivot points, Fibonacci analysis, swing detection, and psychological price levels.

Whether you're a day trader looking for intraday levels, a swing trader identifying trend continuation points, or a position trader seeking major reversal zones, this calculator provides the analytical depth you need to make informed trading decisions.

Key Features

  • 10 Analysis Methods: From classic pivot points to advanced Fibonacci extensions
  • Multi-Timeframe Support: Works on any timeframe from 1-minute to monthly charts
  • Flexible Data Input: Accepts OHLC data from any source (APIs, databases, manual input)
  • Advanced Filtering: ATR-based proximity filtering and clustering algorithms
  • Real-time Context: Automatic nearest support/resistance detection and price positioning
  • Comprehensive Output: Detailed level breakdowns with clear naming conventions
  • Trading Integration: Perfect for automated trading strategies and risk management

Core Analysis Methods

1. Classic Pivot Points - The Foundation of Technical Analysis

Method: pivot_points
Purpose: Calculate traditional pivot points based on the previous period's high, low, and close prices. Pivot points are widely used by traders to identify key support and resistance levels for the upcoming trading session. These levels act as psychological barriers where buying and selling pressure tends to converge, making them excellent reference points for entry, exit, and stop-loss placement strategies.

Formula Overview:

  • Pivot (P): (High + Low + Close) / 3
  • Resistance 1 (R1): (2 × P) - Low
  • Support 1 (S1): (2 × P) - High
  • Resistance 2 (R2): P + (High - Low)
  • Support 2 (S2): P - (High - Low)
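The formulas above are straightforward to verify by hand:

```python
def classic_pivots(high, low, close):
    """Classic floor-trader pivot levels from the formulas above."""
    p = (high + low + close) / 3.0
    return {
        "pivot": p,
        "r1": 2 * p - low,
        "s1": 2 * p - high,
        "r2": p + (high - low),
        "s2": p - (high - low),
    }

levels = classic_pivots(high=105.0, low=95.0, close=100.0)
# pivot = 100, r1 = 105, s1 = 95, r2 = 110, s2 = 90
```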

Example - Daily Pivot Points:

  • rowsExpr: vars.daily_ohlc_data (expression returning OHLC data)
  • open: open (column name for open prices)
  • high: high (column name for high prices)
  • low: low (column name for low prices)
  • close: close (column name for close prices)
  • methods: ["pivot_points"] (analysis methods to use)
  • lookback_period: 20 (period for analysis)

Output:

  • support_levels: Dictionary with support levels (sr_s1, sr_s2, sr_s3)
  • resistance_levels: Dictionary with resistance levels (sr_r1, sr_r2, sr_r3)
  • current_price: Current closing price for reference
  • nearest_support: Closest support level below current price
  • nearest_resistance: Closest resistance level above current price
  • price_position: Relative position between nearest S/R levels (0-1 scale)
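The price_position field is a simple normalization between the two nearest levels. A sketch of the computation (the degenerate-range fallback is an assumption for illustration):

```python
def price_position(price, nearest_support, nearest_resistance):
    """Relative position of price between nearest S/R on a 0-1 scale:
    0 = sitting on support, 1 = sitting on resistance."""
    span = nearest_resistance - nearest_support
    if span <= 0:
        return 0.5  # degenerate range; illustrative fallback
    return (price - nearest_support) / span

pos = price_position(102.5, nearest_support=100.0, nearest_resistance=110.0)
# 0.25: a quarter of the way from support to resistance
```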

2. Woodie Pivot Points - Enhanced Weighting

Method: pivot_woodie
Purpose: Calculate Woodie pivot points, a modified version of traditional pivot points that gives more weight to the closing price. This method is particularly effective in trending markets where the closing price carries significant information about market sentiment and momentum. Woodie pivots provide more responsive levels that better reflect current market conditions compared to classic pivots.

Key Differences from Classic:

  • Pivot (P): (High + Low + 2 × Close) / 4
  • Better suited for trending markets
  • More responsive to recent price action
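A quick comparison shows the effect of the double-weighted close:

```python
def woodie_pivot(high, low, close):
    """Woodie pivot: the close is counted twice, per the formula above."""
    return (high + low + 2 * close) / 4.0

def classic_pivot(high, low, close):
    return (high + low + close) / 3.0

# With a close near the high, the Woodie pivot sits above the classic one
w = woodie_pivot(105.0, 95.0, 104.0)   # 102.0
c = classic_pivot(105.0, 95.0, 104.0)  # ~101.33
```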

Example - Woodie Calculation:

  • methods: ["pivot_woodie"] (use Woodie pivot method)
  • rowsExpr: data.price_data (expression with price data)

3. Camarilla Pivot Points - Intraday Precision

Method: pivot_camarilla
Purpose: Calculate Camarilla pivot points, an advanced pivot system specifically designed for intraday trading and scalping strategies. Unlike traditional pivots, Camarilla levels are calculated using a unique formula that creates tighter ranges around the current price, making them ideal for short-term traders who need precise entry and exit points within a single trading session.

Unique Features:

  • 8 Levels: S1-S4 support and R1-R4 resistance levels
  • Tighter Ranges: More precise levels for scalping and day trading
  • S3/S4, R3/R4 (traditionally labeled L3/L4 and H3/H4): Extreme levels often act as major reversal points
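One common Camarilla formulation looks like this; coefficients vary slightly between sources, so treat the divisors as an assumption rather than the worker's exact math:

```python
def camarilla_levels(high, low, close):
    """Camarilla S1-S4 / R1-R4 using the widely cited 1.1-multiplier
    variant: each level offsets the close by range * 1.1 / divisor."""
    rng = high - low
    divisors = {1: 12.0, 2: 6.0, 3: 4.0, 4: 2.0}
    res = {f"r{i}": close + rng * 1.1 / d for i, d in divisors.items()}
    sup = {f"s{i}": close - rng * 1.1 / d for i, d in divisors.items()}
    return {**res, **sup}

lv = camarilla_levels(high=105.0, low=95.0, close=100.0)
# r4 = 100 + 10 * 1.1 / 2 = 105.5 ; s4 = 94.5
```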

Example - Camarilla Levels:

  • methods: ["pivot_camarilla"] (use Camarilla pivot method)
  • rowsExpr: vars.intraday_data (expression with intraday OHLC data)

Output:

  • support_levels: Camarilla support levels (camarilla_support_1 through camarilla_support_4)
  • resistance_levels: Camarilla resistance levels (camarilla_resistance_1 through camarilla_resistance_4)

4. Fibonacci Retracement - Golden Ratio Analysis

Method: fibonacci_retracement
Purpose: Calculate Fibonacci retracement levels based on recent price swings using the mathematical golden ratio sequence. These levels help identify potential reversal points during price corrections within a larger trend. Fibonacci retracements are particularly powerful because they combine mathematical precision with market psychology, creating levels where traders naturally place orders.

Fibonacci Ratios Used:

  • 0.236 (23.6%): Shallow retracement, often weak support/resistance
  • 0.382 (38.2%): Common retracement level, moderate strength
  • 0.5 (50.0%): Psychological midpoint, strong level
  • 0.618 (61.8%): Golden ratio, very strong level
  • 0.786 (78.6%): Deep retracement, potential reversal zone
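For an up-swing, each ratio pulls back from the swing high toward the swing low:

```python
def fib_retracements(swing_low, swing_high,
                     ratios=(0.236, 0.382, 0.5, 0.618, 0.786)):
    """Retracement levels for an up-swing: each ratio measures how far
    price has pulled back from the swing high toward the swing low."""
    rng = swing_high - swing_low
    return {r: swing_high - rng * r for r in ratios}

levels = fib_retracements(swing_low=100.0, swing_high=200.0)
# the 0.5 retracement of a 100-point swing sits at 150.0,
# the 0.618 "golden ratio" level at 138.2
```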

Example - Fibonacci Analysis:

  • methods: ["fibonacci_retracement"] (use Fibonacci retracement method)
  • lookback_period: 50 (period for swing analysis)
  • rowsExpr: data.historical_prices (expression with historical OHLC data)

Output:

  • support_levels: Fibonacci support levels (fib_0.786_support, fib_0.618_support, fib_0.5_support)
  • resistance_levels: Fibonacci resistance levels (fib_0.382_resistance, fib_0.236_resistance)

5. Fibonacci Extensions - Projection Targets

Method: fibonacci_extensions
Purpose: Calculate Fibonacci extension levels that project potential price targets beyond the current swing range. These levels help traders identify where a trend might continue after breaking through previous highs or lows, providing objective profit targets and continuation pattern recognition. Extensions are essential for position traders who need to set realistic price objectives.

Extension Ratios:

  • 1.272: First extension target
  • 1.618: Golden ratio extension (primary target)
  • 2.0: 100% extension (secondary target)
  • 2.618: Extended target for strong trends
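One common convention projects each ratio of the swing range upward from the swing low (for an upward continuation). Conventions differ between charting tools, so treat this as a sketch:

```python
def fib_extensions(swing_low, swing_high,
                   ratios=(1.272, 1.618, 2.0, 2.618)):
    """Extension targets beyond the swing high: each ratio scales the
    swing range and projects it from the swing low."""
    rng = swing_high - swing_low
    return {r: swing_low + rng * r for r in ratios}

targets = fib_extensions(swing_low=100.0, swing_high=200.0)
# the 1.618 golden-ratio target lands at 261.8, the 2.0 target at 300.0
```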

Example - Extension Targets:

  • methods: ["fibonacci_extensions"] (use Fibonacci extensions method)
  • fib_ext_ratios: "1.272,1.618,2.0,2.618" (comma-separated extension ratios)
  • rowsExpr: vars.swing_data (expression with swing data)

6. Technical Analysis Extrema - Advanced Swing Detection

Method: ta_extrema
Purpose: Identify local maxima and minima (swing highs and lows) using advanced signal processing algorithms. This method automatically detects significant turning points in price action, creating support and resistance levels based on actual market behavior rather than mathematical formulas. It's particularly valuable for swing traders who want to focus on levels that have proven their significance through price action.

Algorithm Features:

  • Scipy Signal Processing: Uses argrelextrema for precise swing detection
  • Order Parameter: Controls sensitivity (higher = fewer, stronger levels)
  • ATR Clustering: Groups nearby levels into consolidated zones
  • Top N Selection: Returns most significant levels
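The detection rule is the same one scipy's argrelextrema applies: a bar is a swing high if it strictly exceeds the `order` bars on either side, and symmetrically for swing lows. A pure-NumPy sketch:

```python
import numpy as np

def local_extrema(prices, order=2):
    """Return indices of swing highs and swing lows, mimicking
    scipy.signal.argrelextrema with np.greater / np.less comparators."""
    prices = np.asarray(prices, dtype=float)
    highs, lows = [], []
    for i in range(order, len(prices) - order):
        window = np.r_[prices[i - order:i], prices[i + 1:i + order + 1]]
        if prices[i] > window.max():
            highs.append(i)       # strictly above all neighbors
        elif prices[i] < window.min():
            lows.append(i)        # strictly below all neighbors
    return highs, lows

prices = [1, 3, 2, 5, 2, 1, 4, 0]
highs, lows = local_extrema(prices, order=1)
# highs at indices 1, 3, 6; lows at indices 2, 5
```

Raising `order` demands a wider dominance window, which is why higher values yield fewer, stronger levels.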

Example - Advanced Swing Analysis:

  • methods: ["ta_extrema"] (use technical analysis extrema method)
  • order: 5 (extrema sensitivity parameter)
  • atr_mult: 1.5 (ATR multiplier for clustering)
  • top_n: 8 (maximum number of levels to return)
  • lookback_bars: 100 (number of bars to analyze)

7. Price Channels (Donchian) - Trend Channel Analysis

Method: price_channels
Purpose: Calculate Donchian price channels (also known as trading ranges) that show the highest high and lowest low over a specified period. These channels help identify trending markets and potential breakout opportunities. When price consistently hugs one side of the channel, it indicates a strong trend, while breakouts from the channel can signal major trend changes or continuation moves.

Channel Components:

  • Upper Channel: Highest high over period
  • Lower Channel: Lowest low over period
  • Mid Channel: Midpoint for additional context
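The channel is just a rolling max/min; a minimal sketch for the latest window:

```python
def donchian_channels(highs, lows, length=20):
    """Upper/lower/mid Donchian channel over the most recent `length` bars."""
    upper = max(highs[-length:])
    lower = min(lows[-length:])
    return {"upper": upper, "lower": lower, "mid": (upper + lower) / 2.0}

ch = donchian_channels(highs=[10, 12, 11, 13, 12],
                       lows=[9, 10, 10, 11, 11], length=5)
# upper = 13, lower = 9, mid = 11.0
```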

Example - Channel Analysis:

  • methods: ["price_channels"] (use price channels method)
  • channel_length: 20 (lookback period for channel calculation)
  • rowsExpr: data.price_series (expression with price data)

8. Psychological Price Levels - Round Number Analysis

Method: psychological_levels
Purpose: Identify psychological price levels based on round numbers that act as significant psychological barriers in traders' minds. These levels (like 100, 1000, 5000, etc.) often cause hesitation or increased activity because they represent clean, easy-to-remember price points. Psychological levels can be more significant than technical levels because they influence the collective behavior of market participants.

Features:

  • Auto Step Detection: Automatically determines appropriate step size
  • Custom Steps: Manual step configuration for specific assets
  • Multi-Level: Generates multiple levels around current price
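A sketch of how round-number levels can be generated. The auto-step rule here (one order of magnitude below the price) is an assumption for illustration, not necessarily the worker's heuristic:

```python
import math

def psychological_levels(price, step=0, count=3):
    """Round-number levels around `price`. step=0 auto-detects a step
    one order of magnitude below the price."""
    if step == 0:
        step = 10 ** (math.floor(math.log10(price)) - 1)
    base = math.floor(price / step) * step   # nearest round number below
    supports = [base - i * step for i in range(count)]
    resistances = [base + (i + 1) * step for i in range(count)]
    return supports, resistances

sup, res = psychological_levels(price=187.4, step=10, count=3)
# supports: [180, 170, 160]; resistances: [190, 200, 210]
```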

Example - Psychological Levels:

  • methods: ["psychological_levels"] (use psychological levels method)
  • psych_step: 0 (step size, 0 = auto-detect)
  • psych_count: 3 (number of levels per side)

Output:

  • support_levels: Psychological support levels (psych_0, psych_1, psych_2)
  • resistance_levels: Psychological resistance levels (psych_0, psych_1, psych_2)

Advanced Configuration Options

Proximity Filtering with ATR

Filter levels based on distance from current price using Average True Range:

  • max_distance_atr: 2.0 (maximum distance in ATR units)
  • strict_side: true (keep only supports below and resistances above current price)
  • methods: ["ta_extrema", "fibonacci_retracement"] (analysis methods)
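The filter keeps only levels within a volatility-scaled distance of the current price. A sketch of both pieces (simple-mean ATR here; Wilder smoothing is another common choice):

```python
def atr(highs, lows, closes, period=14):
    """Average True Range: mean of true ranges over the last `period` bars."""
    trs = []
    for i in range(1, len(closes)):
        tr = max(highs[i] - lows[i],
                 abs(highs[i] - closes[i - 1]),
                 abs(lows[i] - closes[i - 1]))
        trs.append(tr)
    trs = trs[-period:]
    return sum(trs) / len(trs)

def filter_levels(levels, price, atr_value, max_distance_atr=2.0):
    """Drop levels farther than max_distance_atr ATRs from price."""
    limit = max_distance_atr * atr_value
    return [lv for lv in levels if abs(lv - price) <= limit]

kept = filter_levels([90, 98, 101, 120], price=100,
                     atr_value=2.0, max_distance_atr=2.0)
# limit = 4.0, so only 98 and 101 survive
```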

Nearest Level Selection

Get only the most relevant support and resistance levels:

  • nearest_only: true (return only single nearest support and resistance)
  • methods: ["pivot_points"] (analysis methods)

Multi-Method Combination

Combine multiple analysis methods for comprehensive analysis:

  • methods: ["ta_extrema", "fibonacci_retracement", "pivot_points", "psychological_levels"] (multiple analysis methods)
  • lookback_period: 50 (analysis period)
  • order: 3 (extrema sensitivity)
  • atr_mult: 1.2 (ATR multiplier for clustering)

Practical Implementation Examples

Intraday Trading Strategy

Create a complete intraday trading workflow:

  1. Fetch Real-time Data using market data connectors
  2. Calculate Camarilla Pivots for precise intraday levels
  3. Identify Fibonacci Retracements for entry timing
  4. Set Stop Losses at nearest support levels
  5. Define Profit Targets using Fibonacci extensions

Complete Workflow Example:

  • workers[0]: Market data fetcher
    • type: market_data_fetcher
    • symbol: AAPL
    • timeframe: 5m
    • limit: 100
  • workers[1]: Support & Resistance Calculator
    • type: support_resistance
    • rowsExpr: workers[0].data
    • methods: ["pivot_camarilla", "fibonacci_retracement"]
    • lookback_period: 20
    • nearest_only: false

Swing Trading System

Build a swing trading strategy using multiple timeframe analysis:

  • workers[0]: Support & Resistance Calculator
    • type: support_resistance
    • rowsExpr: vars.daily_data
    • methods: ["ta_extrema", "price_channels"]
    • lookback_period: 50
    • order: 5
    • channel_length: 20

Risk Management Integration

Incorporate support/resistance levels into position sizing:

  • workers[0]: Support & Resistance Calculator
    • type: support_resistance
    • rowsExpr: vars.portfolio_data
    • methods: ["pivot_points", "psychological_levels"]
    • max_distance_atr: 1.5
    • strict_side: true
  • workers[1]: Position Sizer
    • type: position_sizer
    • stop_loss_level: workers[0].nearest_support
    • risk_per_trade: 0.02

Automated Trading Bot

Create a fully automated trading system:

  • workflow.name: S&R Trading Bot
  • steps[0]: Fetch data step
    • name: fetch_data
    • worker: market_data_stream
  • steps[1]: Calculate levels step
    • name: calculate_levels
    • worker: support_resistance
    • rowsExpr: steps.fetch_data.result
    • methods: ["ta_extrema", "fibonacci_retracement", "pivot_camarilla"]
    • nearest_only: true
  • steps[2]: Generate signals step
    • name: generate_signals
    • worker: trading_signal_generator
    • support_levels: steps.calculate_levels.support_levels
    • resistance_levels: steps.calculate_levels.resistance_levels
    • current_price: steps.calculate_levels.current_price
  • steps[3]: Execute trades step
    • name: execute_trades
    • worker: order_executor
    • signals: steps.generate_signals.signals

Analysis Methods Comparison

| Method | Best For | Timeframe | Strength | Complexity |
| --- | --- | --- | --- | --- |
| Pivot Points | All Markets | Daily/Weekly | High | Low |
| Woodie Pivots | Trending Markets | Daily | High | Low |
| Camarilla | Intraday/Scalping | 1min-4hr | Very High | Medium |
| Fibonacci Retracement | Reversal Trading | All | High | Medium |
| Fibonacci Extensions | Target Setting | All | High | Medium |
| TA Extrema | Swing Analysis | 4hr-Daily | Very High | High |
| Price Channels | Trend Following | Daily/Weekly | Medium | Low |
| Psychological Levels | Round Numbers | All | Medium | Low |
| Recent Levels | Short-term | 1hr-4hr | Medium | Low |
| Swing Points | Pattern Recognition | 4hr-Daily | High | High |

Trading Applications

Entry Signal Generation

Use support/resistance levels for precise entry timing:

  • Breakout Entries: Enter when price breaks above resistance
  • Reversal Entries: Enter at support during oversold conditions
  • Retracement Entries: Buy at Fibonacci support levels in uptrends

Stop Loss Placement

Protect capital with intelligent stop placement:

  • Below Support: Place stops just below identified support levels
  • ATR-Based: Use ATR to set stops at appropriate distances
  • Multiple Levels: Use secondary levels for trailing stops

Profit Target Setting

Define realistic profit objectives:

  • Fibonacci Extensions: Use 1.618 and 2.618 for primary targets
  • Next S/R Level: Target the next resistance in uptrends
  • Risk-Reward Ratio: Ensure minimum 1:2 risk-reward setups

Trend Identification

Determine market direction using level analysis:

  • Higher Lows/Higher Highs: Series of unbroken support levels
  • Lower Highs/Lower Lows: Series of unbroken resistance levels
  • Channel Breaks: Identify trend changes with channel breakouts

Risk Management

Implement sophisticated risk controls:

  • Position Sizing: Adjust position size based on distance to stop levels
  • Portfolio Heat: Monitor exposure across correlated assets
  • Drawdown Control: Reduce risk during losing periods
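The position-sizing bullet reduces to a one-line formula: risk a fixed fraction of equity over the distance to the stop. A minimal sketch:

```python
def position_size(account_equity, risk_per_trade, entry, stop_loss):
    """Units to hold so that a stop-out loses at most risk_per_trade
    of equity, with the stop placed at a support level."""
    risk_amount = account_equity * risk_per_trade
    per_unit_risk = abs(entry - stop_loss)
    return risk_amount / per_unit_risk

size = position_size(account_equity=50_000, risk_per_trade=0.02,
                     entry=102.0, stop_loss=98.0)
# risks $1,000 over a $4 stop distance -> 250 units
```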

Best Practices and Tips

Method Selection Guidelines

  • Intraday Trading: Use Camarilla pivots + Fibonacci retracements
  • Swing Trading: Combine TA extrema with price channels
  • Position Trading: Focus on pivot points and psychological levels
  • Volatile Markets: Increase ATR multiplier for filtering
  • Trending Markets: Use Woodie pivots and Fibonacci extensions

Parameter Optimization

  • Lookback Period: 20-50 bars for most applications
  • Order Parameter: 3-7 for swing detection (higher = stronger levels)
  • ATR Multiplier: 1.0-2.0 for proximity filtering
  • Top N: 5-10 levels maximum for clarity

Data Quality Considerations

  • Clean OHLC Data: Ensure accurate high/low/close values
  • Consistent Timeframes: Use data from the same timeframe
  • Gap Handling: Account for overnight/weekend gaps
  • Volume Confirmation: Validate levels with volume analysis

Performance Monitoring

  • Backtesting: Test strategies across different market conditions
  • Forward Testing: Validate in real-time before full deployment
  • Performance Metrics: Track win rate, profit factor, maximum drawdown
  • Regular Review: Adjust parameters based on changing market conditions

Integration with Other Workers

Market Data Sources

Connect with various data providers:

  • workers[0]: FRED Economic Data Connector
    • type: fred_connector
    • series_id: SP500
  • workers[1]: Support & Resistance Calculator
    • type: support_resistance
    • rowsExpr: workers[0].observations
    • methods: ["fibonacci_retracement", "ta_extrema"]

Technical Indicators

Combine with momentum and trend indicators:

  • workers[0]: RSI Calculator
    • type: rsi_calculator
    • period: 14
  • workers[1]: Support & Resistance Calculator
    • type: support_resistance
    • rowsExpr: data.ohlc
    • methods: ["pivot_points"]
  • workers[2]: Signal Generator
    • type: signal_generator
    • rsi: workers[0].rsi
    • support: workers[1].nearest_support
    • resistance: workers[1].nearest_resistance

Alert Systems

Set up automated notifications:

  • workers[0]: Support & Resistance Calculator
    • type: support_resistance
    • rowsExpr: vars.price_data
    • methods: ["ta_extrema"]
    • nearest_only: true
  • workers[1]: Telegram Notifier
    • type: telegram_notifier
    • message: Price approaching resistance at {{workers[0].nearest_resistance}}
    • condition: workers[0].price_position > 0.8
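One plausible reading of the price_position value used in the condition above is the price's location inside the support–resistance band, where 0.0 sits at support and 1.0 at resistance. This formula is an assumption for illustration, not the worker's documented internals:

```python
def price_position(price, support, resistance):
    """0.0 at support, 1.0 at resistance; > 0.8 means price is near resistance."""
    if resistance <= support:
        raise ValueError("resistance must be above support")
    return (price - support) / (resistance - support)

# EUR/USD trading at 1.0950 between 1.0800 support and 1.0975 resistance:
print(price_position(1.0950, 1.0800, 1.0975))  # ~0.857, would trigger the alert
```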

Common Use Cases and Examples

Forex Trading Strategy

EUR/USD intraday setup using multiple methods:

  • symbol: EURUSD
  • timeframe: 1h
  • methods: ["pivot_camarilla", "fibonacci_retracement"]
  • lookback_period: 24
  • max_distance_atr: 1.5

Cryptocurrency Analysis

BTC/USDT swing trading with advanced filtering:

  • symbol: BTCUSDT
  • timeframe: 4h
  • methods: ["ta_extrema", "price_channels"]
  • order: 5
  • atr_mult: 2.0
  • top_n: 6

Stock Market Analysis

Apple Inc. daily analysis with psychological levels:

  • symbol: AAPL
  • timeframe: daily
  • methods: ["pivot_points", "psychological_levels", "fibonacci_retracement"]
  • psych_step: 0
  • psych_count: 3

Commodity Trading

Gold futures with Woodie pivots and extensions:

  • symbol: GC=F
  • timeframe: daily
  • methods: ["pivot_woodie", "fibonacci_extensions"]
  • fib_ext_ratios: "1.618,2.618"

Advanced Features and Customization

Custom Fibonacci Ratios

Define your own Fibonacci ratios for specific strategies:

  • methods: ["fibonacci_extensions"]
  • fib_ext_ratios: "0.786,1.0,1.272,1.618,2.0,2.618,3.0"
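Extension levels project the swing range beyond the move, commonly as low + range × ratio for an upswing. A sketch of how the custom ratios above map to price levels (the projection convention is one common variant, not necessarily the worker's exact formula):

```python
def fib_extensions(swing_low, swing_high, ratios):
    """Project extension levels from an upswing: low + (high - low) * ratio."""
    swing_range = swing_high - swing_low
    return {r: round(swing_low + swing_range * r, 2) for r in ratios}

ratios = [0.786, 1.0, 1.272, 1.618, 2.0, 2.618, 3.0]
levels = fib_extensions(100.0, 120.0, ratios)
print(levels[1.618])  # 100 + 20 * 1.618 = 132.36
```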

Dynamic Lookback Periods

Adjust lookback based on market volatility:

  • lookback_period: vars.adaptive_period
  • methods: ["ta_extrema"]
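One way to populate vars.adaptive_period is to shrink the lookback when volatility rises so levels track faster markets, clamped to the recommended 20-50 bar range. The scaling rule below is an assumption for illustration, not platform behavior:

```python
def adaptive_lookback(atr, atr_baseline, base_period=30, lo=20, hi=50):
    """Shorter lookback in volatile markets, longer in calm ones."""
    if atr_baseline <= 0:
        return base_period
    period = base_period * (atr_baseline / atr)  # inverse-volatility scaling
    return int(max(lo, min(hi, period)))         # clamp to [lo, hi]

print(adaptive_lookback(atr=2.0, atr_baseline=1.0))  # volatile market -> 20
print(adaptive_lookback(atr=0.5, atr_baseline=1.0))  # calm market -> 50
```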

Multi-Asset Portfolio Analysis

Analyze entire portfolios simultaneously:

  • workers[0]: Portfolio Fetcher
    • type: portfolio_fetcher
    • symbols: ["AAPL", "MSFT", "GOOGL"]
  • workers[1]: Support & Resistance Calculator
    • type: support_resistance
    • rowsExpr: workers[0].data
    • methods: ["pivot_points"]
    • group_by: symbol
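The per-symbol grouping above can be pictured with the classic floor-trader pivot formula (P = (H + L + C) / 3, R1 = 2P − L, S1 = 2P − H) applied to each symbol's prior-day bar. This is a sketch of the grouping logic with illustrative prices, not the worker's actual implementation:

```python
def pivot_points(high, low, close):
    """Classic floor-trader pivots from the prior period's bar."""
    p = (high + low + close) / 3
    return {"pivot": round(p, 2),
            "r1": round(2 * p - low, 2),   # first resistance
            "s1": round(2 * p - high, 2)}  # first support

bars = {  # prior-day OHLC per symbol (illustrative values)
    "AAPL": {"high": 192.0, "low": 188.0, "close": 190.0},
    "MSFT": {"high": 421.0, "low": 415.0, "close": 419.0},
}
levels = {sym: pivot_points(**bar) for sym, bar in bars.items()}
print(levels["AAPL"])  # {'pivot': 190.0, 'r1': 192.0, 's1': 188.0}
```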

Performance Optimization

Efficient Data Processing

  • Batch Processing: Process multiple symbols simultaneously
  • Incremental Updates: Update levels as new data arrives
  • Caching: Cache calculated levels to reduce computation
  • Parallel Execution: Run multiple analysis methods concurrently

Memory Management

  • Data Chunking: Process large datasets in chunks
  • Level Filtering: Remove irrelevant distant levels
  • Result Compression: Compress output for storage efficiency

Troubleshooting and Common Issues

No Levels Generated

Problem: Worker returns empty support/resistance levels

Solutions:

  • Check OHLC data format and column mapping
  • Verify sufficient historical data (minimum 5-10 bars)
  • Adjust lookback period for your timeframe
  • Check for data gaps or invalid values
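The data checks above can be automated with a quick validation pass before feeding bars to the worker. An illustrative helper, not part of the platform:

```python
def validate_ohlc(bars, min_bars=10):
    """Return a list of problems found in a sequence of OHLC dicts."""
    issues = []
    if len(bars) < min_bars:
        issues.append(f"only {len(bars)} bars; need at least {min_bars}")
    for i, bar in enumerate(bars):
        if any(bar.get(k) is None for k in ("open", "high", "low", "close")):
            issues.append(f"bar {i}: missing OHLC field")
            continue
        # High must bound the bar; low must floor it
        if not (bar["low"] <= bar["open"] <= bar["high"] and
                bar["low"] <= bar["close"] <= bar["high"]):
            issues.append(f"bar {i}: OHLC values inconsistent")
    return issues

bad = [{"open": 10, "high": 9, "low": 8, "close": 9.5}]  # open above high
print(validate_ohlc(bad))  # flags the bad bar and the short history
```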

Inconsistent Results

Problem: Levels change significantly between runs

Solutions:

  • Use consistent data sources
  • Fix lookback periods and parameters
  • Account for different timeframes
  • Implement data validation checks

Performance Issues

Problem: Slow processing with large datasets

Solutions:

  • Reduce lookback periods
  • Use nearest_only filtering
  • Implement data sampling
  • Consider parallel processing

Future Enhancements

We're continuously expanding the Support & Resistance Calculator with:

  • Machine Learning Integration: AI-powered level validation and prediction
  • Intermarket Analysis: Correlation-based level confirmation across assets
  • Volume Profile Integration: Volume-weighted support/resistance zones
  • Order Flow Analysis: Real-time order book level detection
  • Multi-Timeframe Synthesis: Automated level alignment across timeframes
  • Pattern Recognition: Automatic chart pattern detection using S/R levels
  • Sentiment Analysis: News and social media impact on key levels

Important Disclaimer: The Support & Resistance Calculator provides technical analysis tools for informational purposes. The calculated levels and analysis generated by this tool should not be considered as professional financial, investment, or trading advice. All trading decisions should be made based on your own research, risk tolerance, and consultation with qualified financial professionals. Technical analysis is not a guarantee of future performance. Past performance does not guarantee future results. Use this tool at your own risk and responsibility.

Support and resistance levels are fundamental concepts in technical analysis that help traders identify key price levels where buying and selling pressure converge. Whether you're a beginner learning technical analysis or an experienced trader building automated strategies, the Support & Resistance Calculator provides the analytical depth you need to enhance your trading edge.

Questions about implementing support and resistance analysis? Our support team is here to help you integrate these powerful technical tools into your trading workflows! 📈📉💹

FRED Economic Data Connector - Access Federal Reserve Economic Database

· 6 min read
ApudFlow OS
Platform Updates

In the world of financial analysis and economic research, access to reliable economic data is crucial. Introducing the FRED Economic Data Connector - a powerful new addition to the ApudFlow platform that provides seamless access to the Federal Reserve Economic Data (FRED) database, the premier source for US economic time series data.

What is FRED Economic Data Connector?

The FRED Economic Data Connector leverages the comprehensive FRED API maintained by the Federal Reserve Bank of St. Louis. FRED contains over 816,000 economic time series from 108 sources, making it the most comprehensive freely available database of US economic data.

Unlike traditional data providers that require expensive subscriptions, FRED provides free access to critical economic indicators, making sophisticated economic analysis accessible to everyone.

Key Features

  • Comprehensive Data Access: Connect to over 816,000 economic time series
  • Smart Autocomplete: Intelligent series ID suggestions as you type
  • Three Core Operations: Series observations, metadata, and releases
  • Flexible Data Retrieval: Custom date ranges, frequencies, and transformations
  • Federal Reserve Quality: Official data from the Federal Reserve Bank of St. Louis
  • Free API Access: No subscription costs for basic usage
  • Real-time Updates: Access to the latest economic data releases

Core Operations

1. Series Observations - Get Economic Data

Operation: series_observations
Purpose: Retrieve actual economic data points for any series

Key Parameters:

  • series_id: The FRED series identifier (with autocomplete)
  • start_date: Start date (optional, format: YYYY-MM-DD)
  • end_date: End date (optional, format: YYYY-MM-DD)
  • frequency: Data frequency - d/daily, w/weekly, m/monthly, q/quarterly, a/annual
  • units: Data transformation - lin/levels, chg/change, pch/percent change

Example - GDP Data:

{
  "series_id": "GDPC1",
  "start_date": "2020-01-01",
  "end_date": "2024-12-31",
  "frequency": "q",
  "units": "lin"
}

Output:

{
  "series_id": "GDPC1",
  "count": 20,
  "observations": [
    {"date": "2024-07-01", "value": "22679.3"},
    {"date": "2024-04-01", "value": "22605.4"},
    {"date": "2024-01-01", "value": "22496.7"}
  ]
}
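Outside the platform, the same request maps onto the underlying FRED REST API (the worker's start_date/end_date correspond to FRED's observation_start/observation_end parameters). A minimal sketch of building the query and extracting values; you supply your own api_key, and error handling is omitted:

```python
from urllib.parse import urlencode

FRED_BASE = "https://api.stlouisfed.org/fred/series/observations"

def build_fred_url(series_id, api_key, **params):
    """Build a FRED series/observations request URL."""
    query = {"series_id": series_id, "api_key": api_key,
             "file_type": "json", **params}
    return f"{FRED_BASE}?{urlencode(query)}"

def parse_observations(payload):
    """Extract (date, value) pairs, skipping FRED's '.' missing-value markers."""
    return [(obs["date"], float(obs["value"]))
            for obs in payload["observations"] if obs["value"] != "."]

url = build_fred_url("GDPC1", "YOUR_API_KEY",
                     observation_start="2020-01-01", frequency="q", units="lin")
sample = {"observations": [{"date": "2024-07-01", "value": "22679.3"},
                           {"date": "2024-10-01", "value": "."}]}
print(parse_observations(sample))  # [('2024-07-01', 22679.3)]
```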

2. Series Information - Metadata & Details

Operation: series_info
Purpose: Get detailed information about a data series

Key Parameters:

  • series_id: The FRED series identifier (with autocomplete)

Example - Get GDP Metadata:

{
  "series_id": "GDPC1"
}

Output:

{
  "series_id": "GDPC1",
  "title": "Real Gross Domestic Product",
  "units": "Billions of Chained 2012 Dollars",
  "frequency": "Quarterly",
  "seasonal_adjustment": "Seasonally Adjusted Annual Rate",
  "last_updated": "2024-10-30 07:53:02-05",
  "observation_start": "1947-01-01",
  "observation_end": "2024-07-01"
}

3. Economic Releases - Latest Data Updates

Operation: releases
Purpose: Get information about recent economic data releases

Key Parameters:

  • limit: Number of releases to return (default: 25, max: 1000)

Example - Recent Releases:

{
  "limit": 10
}

Output:

{
  "count": 10,
  "releases": [
    {
      "id": "53",
      "name": "Gross Domestic Product",
      "press_release": true,
      "link": "https://www.federalreserve.gov/releases/gdp/",
      "notes": "The Gross Domestic Product (GDP) is the market value of goods and services produced by labor and property in the United States."
    }
  ]
}

Smart Autocomplete for Series Discovery

The FRED connector features intelligent autocomplete that helps you discover series IDs as you type. Simply start typing keywords like "GDP", "unemployment", "inflation", or "treasury" and get instant suggestions.

Popular Series Examples:

  • GDPC1 - Real Gross Domestic Product
  • UNRATE - Unemployment Rate
  • CPIAUCSL - Consumer Price Index
  • DGS10 - 10-Year Treasury Rate
  • MORTGAGE30US - 30-Year Mortgage Rate
  • DEXUSEU - US/Euro Exchange Rate
  • FEDFUNDS - Federal Funds Rate
  • HOUST - Housing Starts
  • INDPRO - Industrial Production Index

Data Transformation Options

The FRED connector supports several data transformations:

  • Units: lin (levels), chg (change), pch (percent change)
  • Frequency: d (daily), w (weekly), m (monthly), q (quarterly), a (annual)
  • Date Ranges: Custom start/end dates for historical analysis
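The chg and pch units correspond to simple period-over-period differences and percent changes of the raw levels (lin): chg = x(t) − x(t−1) and pch = (x(t) / x(t−1) − 1) × 100. A reference sketch of the arithmetic:

```python
def transform(values, units="lin"):
    """Apply a FRED-style units transformation to a series of levels."""
    if units == "lin":  # levels, unchanged
        return list(values)
    pairs = list(zip(values[1:], values[:-1]))  # (current, previous)
    if units == "chg":  # change from previous period
        return [cur - prev for cur, prev in pairs]
    if units == "pch":  # percent change from previous period
        return [round((cur / prev - 1) * 100, 2) for cur, prev in pairs]
    raise ValueError(f"unsupported units: {units}")

gdp = [22496.7, 22605.4, 22679.3]   # quarterly levels, as in the example above
print(transform(gdp, "chg"))        # approximately [108.7, 73.9]
print(transform(gdp, "pch"))        # [0.48, 0.33]
```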

Practical Implementation Examples

Automated Economic Dashboard

Create a workflow that:

  1. Fetches latest GDP data using series_observations
  2. Retrieves unemployment rates with series_info for metadata
  3. Monitors inflation metrics with custom date ranges
  4. Generates economic health score
  5. Sends alerts on significant changes

Trading Strategy Integration

Build algorithmic trading strategies based on:

  • Federal Funds Rate changes (FEDFUNDS)
  • Employment data releases (PAYEMS)
  • GDP growth indicators (GDPC1)
  • Inflation metrics (CPIAUCSL)

Risk Management System

Monitor economic indicators for:

  • Recession signals using GDP data
  • Inflation pressure with CPI series
  • Labor market health via unemployment data
  • Currency stability with exchange rates

Getting Started

Ready to harness the power of economic data?

  1. Access FRED Connector in your ApudFlow workspace
  2. Start typing series IDs - use autocomplete to discover economic indicators
  3. Try popular series like GDPC1 (GDP), UNRATE (unemployment), or CPIAUCSL (inflation)
  4. Experiment with parameters - add date ranges, change frequencies, apply transformations
  5. Get series metadata using series_info to understand your data
  6. Monitor releases with the releases operation for latest updates
  7. Build automated workflows for economic analysis and alerts

| Series ID | Description | Frequency | Units |
| --- | --- | --- | --- |
| GDPC1 | Real Gross Domestic Product | Quarterly | Billions of Chained 2012 Dollars |
| UNRATE | Unemployment Rate | Monthly | Percent |
| CPIAUCSL | Consumer Price Index | Monthly | Index 1982-84=100 |
| FEDFUNDS | Federal Funds Rate | Monthly | Percent |
| DGS10 | 10-Year Treasury Rate | Daily | Percent |
| MORTGAGE30US | 30-Year Mortgage Rate | Weekly | Percent |
| DEXUSEU | US/Euro Exchange Rate | Daily | US Dollars per Euro |
| HOUST | Housing Starts | Monthly | Thousands of Units |
| INDPRO | Industrial Production Index | Monthly | Index 2017=100 |
| PAYEMS | All Employees, Total Nonfarm | Monthly | Thousands of Persons |

API Limits and Best Practices

  • Without API Key: 2,000 requests per hour
  • With API Key (free to obtain, optional): 120,000 requests per hour
  • Data Freshness: Most series update within 1-2 business days
  • Caching: Consider caching frequently used data
  • Rate Limiting: Built-in delays prevent API limit violations

Future Enhancements

We're continuously expanding FRED connector capabilities with:

  • Real-time data streaming for live economic indicators
  • Bulk data operations for large historical datasets
  • Advanced analytics integration with economic models
  • Multi-series correlation analysis tools
  • Automated report generation from economic data

Important Disclaimer: FRED Economic Data Connector provides access to economic and financial data for informational purposes. The data and analysis generated by this tool should not be considered as professional financial, investment, or trading advice. All investment decisions should be made based on your own research, risk tolerance, and consultation with qualified financial professionals. Economic data may be revised historically. Use this tool at your own risk and responsibility.

Federal Reserve Economic Data (FRED) is the premier source for US economic time series data. Whether you're building economic models, monitoring market conditions, or conducting financial research, FRED connector provides the economic intelligence you need to make informed decisions.

Questions about implementing FRED connector? Our support team is here to help you integrate economic data into your workflows! 📊💹📈

AI Classifier - Intelligent Decision Making for Your Workflows

· 5 min read
ApudFlow OS
Platform Updates

Introducing the AI Classifier worker - a powerful new addition to the ApudFlow platform that brings intelligent decision-making capabilities to your workflows. Using advanced AI models, this worker can analyze complex data patterns and make classification decisions that drive your automated processes.

What is AI Classifier?

The AI Classifier worker leverages large language models to analyze data and classify it according to your specific instructions. Unlike traditional rule-based classifiers, AI Classifier can understand context, recognize patterns, and make nuanced decisions based on natural language prompts.

Key Features

  • Flexible Classification: Define your own classification criteria and options
  • Context-Aware Analysis: Processes complex data structures and understands relationships
  • Multiple AI Models: Choose from various AI models for different use cases
  • Workflow Integration: Seamlessly integrates with existing workflow logic
  • Real-time Processing: Fast classification for time-sensitive decisions

Financial Markets Applications

AI Classifier excels in financial data analysis and automated trading scenarios. Here are some powerful use cases:

1. Stock Market Classification

Automatically classify stocks based on their characteristics:

Prompt: "Classify this stock data as: gold, nasdaq, crypto, forex, commodities"

Use Case: Route different types of financial instruments to specialized analysis workflows.

2. Market Sentiment Analysis

Analyze news articles and social media sentiment:

Prompt: "Analyze the sentiment of this financial news: bullish, bearish, neutral"

Use Case: Automatically adjust trading strategies based on market sentiment.

3. Trading Signal Generation

Generate buy/sell/hold signals from technical indicators:

Prompt: "Based on RSI, MACD, and volume indicators, generate signal: buy, sell, hold"

Use Case: Create automated trading systems that respond to technical analysis.

4. Risk Assessment

Evaluate investment risk levels:

Prompt: "Assess risk level based on volatility, beta, and Sharpe ratio: low, medium, high, extreme"

Use Case: Implement dynamic risk management in investment portfolios.

5. Market Regime Detection

Identify current market conditions:

Prompt: "Classify current market regime: trending_bullish, trending_bearish, ranging, volatile, calm"

Use Case: Switch between different trading strategies based on market conditions.

6. News Impact Classification

Determine the significance of financial news:

Prompt: "Classify the impact of this news on markets: major, moderate, minor, irrelevant"

Use Case: Filter and prioritize news feeds for faster decision making.

7. Asset Allocation Recommendations

Suggest portfolio allocations:

Prompt: "Recommend asset allocation based on risk profile: conservative, balanced, aggressive"

Use Case: Automate portfolio rebalancing based on changing market conditions.

How to Use AI Classifier

Basic Setup

  1. Add AI Classifier to your workflow canvas
  2. Configure the prompt with your classification instructions
  3. Specify data source using the dataExp field
  4. Connect to decision branches based on classification results

Example Workflow: Stock Analysis Pipeline

[Data Fetcher] → [AI Classifier: "gold, nasdaq, crypto"]
                             │
                  ┌──────────┴──────────┐
                  │                     │
           [Gold Analysis]       [Stock Analysis]
                  ↓                     ↓
          [Gold Strategies]      [Tech Strategies]

Advanced Configuration

Prompt Engineering Tips:

  • Be specific about classification criteria
  • Include examples in your prompts
  • Define clear decision boundaries
  • Test with sample data before deployment

Data Input Options:

  • Direct data objects
  • Context expressions (data.price, vars.indicators)
  • Complex nested structures
  • Real-time market data feeds

Technical Implementation

The AI Classifier uses OpenRouter's API to access multiple AI models including:

  • Meta Llama 3.1 (recommended for financial analysis)
  • GPT-4 (for complex reasoning)
  • Claude (for nuanced decision making)
  • Other specialized models

Response Processing:

  • Automatic extraction of clean decisions
  • Removal of AI explanations and reasoning
  • Consistent output format for workflow integration
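The clean-decision extraction described above can be approximated by normalizing the model's free-form reply against the allowed labels. This is a sketch of the idea; the platform's actual post-processing may differ:

```python
def extract_decision(reply, options):
    """Map a free-form model reply onto one of the allowed labels, or None."""
    text = reply.strip().lower()
    if not text:
        return None
    # Exact match on the first line, e.g. a reply beginning "Bullish."
    first_line = text.splitlines()[0].strip(" .:!'\"")
    if first_line in options:
        return first_line
    # Fall back to the first allowed label mentioned anywhere in the reply
    for opt in options:
        if opt in text:
            return opt
    return None  # no recognizable decision; caller should handle the fallback

opts = ["bullish", "bearish", "neutral"]
print(extract_decision("Bullish.\nMomentum and breadth support upside.", opts))
```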

Performance & Reliability

  • Low Latency: Optimized for real-time decision making
  • Error Handling: Graceful fallbacks and error reporting
  • Cost Effective: Efficient token usage and model selection
  • Scalable: Handles high-frequency trading scenarios

Real-World Success Stories

Automated Trading Bot

A quantitative trading firm implemented AI Classifier to automatically categorize incoming market data and route it to specialized analysis engines, reducing manual classification time by 95%.

Risk Management System

An investment bank uses AI Classifier to assess risk levels of new positions in real-time, ensuring compliance with regulatory requirements and internal risk policies.

News-Driven Trading

A hedge fund employs AI Classifier to analyze breaking financial news and automatically adjust portfolio positions based on sentiment and impact analysis.

Getting Started

Ready to add intelligent decision-making to your workflows?

  1. Access AI Classifier in your ApudFlow workspace
  2. Start with simple classifications to understand the capabilities
  3. Gradually increase complexity as you become familiar with prompt engineering
  4. Integrate with existing workflows for enhanced automation

Future Enhancements

We're continuously improving AI Classifier with:

  • Custom model fine-tuning options
  • Batch processing capabilities
  • Advanced prompt templates
  • Integration with more AI providers
  • Specialized financial analysis models

Important Disclaimer: AI Classifier is a tool for automated decision-making and classification. The classifications and decisions generated by this tool should not be considered as professional financial, investment, or trading advice. All investment decisions should be made based on your own research, risk tolerance, and consultation with qualified financial professionals. Past performance does not guarantee future results. Use this tool at your own risk and responsibility.

AI Classifier represents the next evolution in workflow automation, bringing AI-powered intelligence to decision-making processes. Whether you're building trading systems, risk management platforms, or automated data processing pipelines, AI Classifier provides the intelligent routing capabilities you need.

Have questions or need help implementing AI Classifier in your workflows? Reach out to our support team!

AI Data Analyzer - Transform Raw Data into Strategic Intelligence

· 8 min read
ApudFlow OS
Platform Updates

In an era of data abundance, the real challenge lies not in collecting information, but in extracting meaningful insights that drive intelligent decisions. Introducing the AI Data Analyzer worker - a sophisticated AI-powered tool that transforms raw data into actionable intelligence, trends, and strategic recommendations.

What is AI Data Analyzer?

The AI Data Analyzer worker employs advanced machine learning algorithms to analyze datasets, identify patterns, detect anomalies, and generate data-driven insights. Unlike traditional analytics tools, AI Data Analyzer understands context, recognizes complex relationships, and provides human-like interpretation of data with actionable recommendations.

Key Features

  • Multi-Type Analysis: Choose from general, financial, trend, anomaly, and predictive analysis modes
  • Flexible Detail Levels: Control analysis depth from brief summaries to comprehensive reports
  • Intelligent Pattern Recognition: Automatically identifies trends, correlations, and anomalies
  • Contextual Recommendations: Generates actionable insights based on data analysis
  • Financial Expertise: Specialized algorithms for market data and investment analysis

Financial Markets Applications

AI Data Analyzer excels in financial data analysis, providing sophisticated insights that inform investment decisions and risk management strategies.

1. Stock Market Trend Analysis

Input Data: Historical price data, volume, technical indicators (RSI, MACD, Bollinger Bands)
Analysis Type: Trend Analysis
Detail Level: Detailed

Example Analysis Output:

Key Findings:
- Strong upward trend identified with 68% momentum strength
- Support level established at $142.50, resistance at $158.20
- Volume confirms trend direction with increasing participation

Trends & Patterns:
- Primary trend: Bullish (confirmed by moving averages alignment)
- Secondary trend: Short-term consolidation forming potential cup pattern
- Seasonal pattern: Q4 typically shows 12-15% gains historically

Anomalies:
- Unusual volume spike on October 15th (2.3x average) - potential institutional accumulation
- Price gap down on October 22nd requires monitoring for retest

Recommendations:
- Maintain long position with stop loss at $145.00
- Consider adding to position on pullbacks to support levels
- Monitor volume patterns for continuation signals
- Watch for breakout above $158.20 for accelerated gains

Risks/Considerations:
- Market volatility increased 23% in past 30 days
- Earnings report due in 2 weeks could cause volatility
- Broader market correlation at 0.78 - watch S&P 500 direction

2. Transaction Anomaly Detection

Input Data: Trading transaction logs, account activities, timing patterns
Analysis Type: Anomaly Detection
Detail Level: Detailed

Example Analysis Output:

Key Findings:
- Identified 12 high-risk transactions requiring immediate review
- Pattern suggests potential coordinated trading activity
- Risk score elevated to "Critical" level

Trends & Patterns:
- Normal trading hours: 9:30 AM - 4:00 PM EST
- Anomalous activity: 3:15 AM - 5:30 AM EST (off-hours trading)
- Transaction sizes: 85% within normal range, 15% significantly larger

Anomalies Detected:
1. **Large Block Trade**: $2.8M transaction at 4:15 AM - 3.2x average size
2. **Frequent Small Trades**: 47 transactions in 15 minutes from single account
3. **Round Number Pattern**: Multiple $100,000 exact trades (potential spoofing)
4. **Geographic Anomaly**: Trades originating from unusual IP locations
5. **Timing Pattern**: Transactions clustered around news events

Recommendations:
- Freeze suspicious accounts pending investigation
- Implement enhanced KYC checks for flagged accounts
- Review trading algorithms for potential manipulation patterns
- Alert compliance team for regulatory reporting requirements

Risks/Considerations:
- Potential market impact if large positions are unwound
- Legal implications of false positive identifications
- Customer relationship management for legitimate high-volume traders

3. Investment Portfolio Optimization

Input Data: Current holdings, risk metrics, correlation matrix, market data
Analysis Type: Predictive
Detail Level: Detailed

Example Analysis Output:

Key Findings:
- Portfolio volatility at 18.5% (above target 15%)
- Technology sector overweight at 42% vs target 30%
- Expected return: 12.3% vs benchmark 10.8%

Trends & Patterns:
- Momentum stocks outperforming value stocks by 8.2%
- Interest rate sensitivity increased due to duration exposure
- ESG factors showing positive correlation with returns

Anomalies:
- Single stock concentration risk: AAPL represents 8.5% of portfolio
- Sector correlation breakdown during market stress periods
- Options positioning creates asymmetric risk profile

Recommendations:
Portfolio Rebalancing Actions:
1. **Reduce Technology Exposure**: Sell 12% of tech holdings ($2.1M)
2. **Increase Diversification**: Add emerging markets exposure (5%)
3. **Hedging Strategy**: Implement put options for downside protection
4. **Sector Rotation**: Shift from growth to quality/value stocks

Optimal Allocation Suggestion:
- US Large Cap: 35% (current: 42%)
- International Developed: 20% (current: 15%)
- Emerging Markets: 15% (current: 10%)
- Fixed Income: 20% (current: 18%)
- Alternatives: 10% (current: 15%)

Risks/Considerations:
- Transaction costs of rebalancing: estimated $45K
- Tax implications of realized gains
- Market timing risk if executed during volatility
- Model assumptions may not hold in extreme scenarios

4. Market Regime Classification

Input Data: Multi-asset returns, volatility measures, economic indicators
Analysis Type: Financial
Detail Level: Standard

Example Analysis Output:

Key Findings:
- Current market regime: "Risk-On" with bullish momentum
- Regime confidence: 78% based on indicator alignment
- Expected duration: 3-6 months before potential transition

Trends & Patterns:
- Equity markets: +12% YTD with low volatility (VIX: 14.2)
- Credit spreads: Tightening trend continuing
- Economic data: Improving PMI readings across sectors
- Currency movements: Risk-on currencies strengthening

Anomalies:
- Bond yields not following typical risk-on pattern
- Commodity prices showing mixed signals
- Some sectors lagging broader market performance

Recommendations:
- Maintain overweight in equities vs bonds
- Favor cyclical sectors (Financials, Industrials, Materials)
- Reduce defensive positions (Utilities, Consumer Staples)
- Consider leveraged exposure through futures/options
- Monitor leading indicators for regime change signals

Risks/Considerations:
- Central bank policy uncertainty remains elevated
- Geopolitical tensions could trigger rapid regime shift
- Valuation metrics approaching historical peaks

5. Customer Behavior Analysis

Input Data: Transaction history, browsing patterns, demographic data
Analysis Type: General
Detail Level: Detailed

Example Analysis Output:

Key Findings:
- Customer lifetime value increased 23% YoY
- Churn rate decreased to 4.2% (industry average: 6.8%)
- High-value segment shows 34% engagement increase

Trends & Patterns:
- Mobile app usage up 45% since last quarter
- Weekend activity increased 28% vs weekdays
- Age group 25-34 shows highest engagement (52% of transactions)
- Subscription upgrades concentrated in Q4

Anomalies:
- Sudden drop in engagement from enterprise segment (-15%)
- Unusual spike in support tickets from single user group
- Geographic shift in user acquisition patterns

Recommendations:
- Launch targeted mobile marketing campaign
- Develop weekend-specific promotions
- Create loyalty program for 25-34 demographic
- Investigate enterprise segment concerns
- Optimize Q4 upgrade incentives

Risks/Considerations:
- Privacy regulations impact data collection capabilities
- Economic factors may affect high-value segment behavior
- Competitive landscape changes could impact retention

How to Use AI Data Analyzer

Basic Setup

  1. Add AI Data Analyzer to your workflow
  2. Input your dataset (JSON, CSV, or structured text)
  3. Select analysis type based on your objectives
  4. Choose detail level for appropriate depth
  5. Review AI-generated insights and recommendations

Advanced Configuration

Analysis Type Selection:

  • General: Broad pattern recognition and insights
  • Financial: Market-specific analysis with investment context
  • Trend Analysis: Focus on directional movements and momentum
  • Anomaly Detection: Identify outliers and unusual patterns
  • Predictive: Forecast future scenarios and opportunities

Detail Level Optimization:

  • Brief: Executive summaries for quick decisions
  • Standard: Balanced analysis for most use cases
  • Detailed: Comprehensive reports for deep analysis

Technical Implementation

AI Model: Specialized Llama 3.1 model optimized for analytical tasks
Data Processing: Handles structured and unstructured data up to 50MB
Analysis Speed: Real-time processing for most datasets
Output Format: Structured insights with clear recommendations

Real-World Success Stories

Quantitative Hedge Fund

A systematic trading firm implemented AI Data Analyzer to process multi-asset market data, identifying trading opportunities 40% faster than traditional methods while reducing false signals by 60%.

Retail Banking Institution

A major bank uses AI Data Analyzer to monitor transaction patterns, successfully identifying and preventing $2.3M in potential fraudulent activities within the first year.

Asset Management Company

An investment firm employs AI Data Analyzer for portfolio optimization, achieving 2.1% annual outperformance vs benchmark through data-driven rebalancing decisions.

FinTech Startup

A financial technology company integrated AI Data Analyzer into their robo-advisor platform, improving client satisfaction scores by 35% through personalized, data-driven recommendations.

Integration Examples

Automated Trading System

[Market Data Feed] → [AI Data Analyzer: "trend_analysis"] → [Trading Signal Engine]
                                                                      ↓
                                                          [Risk Management System]

Fraud Detection Pipeline

[Transaction Stream] → [AI Data Analyzer: "anomaly_detection"] → [Alert System]
                                                                      ↓
                                                          [Investigation Queue]

Portfolio Management Workflow

[Market Data + Holdings] → [AI Data Analyzer: "predictive"] → [Rebalancing Engine]
                                                                      ↓
                                                             [Client Reporting]

Future Enhancements

We're continuously evolving AI Data Analyzer with:

  • Real-time streaming analysis for live data processing
  • Custom model training for domain-specific analysis
  • Multi-modal analysis combining text, numbers, and images
  • Collaborative features for team analysis workflows
  • Integration APIs for third-party data sources

Getting Started

Ready to unlock the power of intelligent data analysis?

  1. Access AI Data Analyzer in your ApudFlow workspace
  2. Prepare your dataset in structured format
  3. Start with sample analyses to understand capabilities
  4. Integrate into decision workflows for enhanced intelligence
  5. Monitor and refine analysis parameters based on results

Important Disclaimer: AI Data Analyzer is a tool for data analysis and pattern recognition. The insights and recommendations generated by this tool should not be considered as professional financial, investment, or trading advice. All investment decisions should be made based on your own research, risk tolerance, and consultation with qualified financial professionals. Past performance does not guarantee future results. Use this tool at your own risk and responsibility.

AI Data Analyzer represents the future of data-driven decision making, transforming overwhelming datasets into clear, actionable intelligence. Whether you're managing investments, detecting fraud, or optimizing business processes, AI Data Analyzer provides the analytical power you need to stay ahead.

AI Summarizer - Transform Long Content into Actionable Insights

· 7 min read
ApudFlow OS
Platform Updates

In today's information-overloaded world, the ability to quickly distill large volumes of content into concise, actionable insights is invaluable. Introducing the AI Summarizer worker - a powerful new addition to the ApudFlow platform that uses advanced AI to condense text, reports, and data while preserving critical information.

What is AI Summarizer?

The AI Summarizer worker leverages state-of-the-art language models to analyze and condense lengthy content into focused summaries. Unlike simple text truncation, AI Summarizer understands context, identifies key themes, and creates coherent summaries that capture the essence of the original material.

Key Features

  • Multiple Summary Types: Choose from concise, detailed, bullet-point, executive, and financial summaries
  • Intelligent Content Analysis: AI understands context and preserves critical information
  • Customizable Length: Control summary length from 50 to 1000 words
  • Focus Areas: Specify particular aspects to emphasize in summaries
  • Financial Expertise: Specialized handling of financial reports and market data

Financial Markets Applications

AI Summarizer excels in processing financial content and transforming it into digestible insights for faster decision-making.

1. Earnings Report Summarization

Input: 50-page quarterly earnings report
Prompt: "Create an executive summary focusing on revenue growth, profitability, and future guidance"

Example Output:

Q3 2025 Earnings Summary:
- Revenue increased 15% YoY to $2.8B, driven by cloud services growth
- Operating margin improved to 28% from 24%, exceeding analyst expectations
- FY2026 guidance raised: Revenue $11.2-11.6B (up from $10.8-11.2B)
- Key drivers: AI adoption, international expansion, cost optimization
- Risks: Economic uncertainty, currency fluctuations, competitive pressures

2. Analyst Report Synthesis

Input: Multiple 20-30 page analyst reports on the same stock
Prompt: "Summarize key analyst recommendations, price targets, and investment theses"

Example Output:

Analyst Consensus on AAPL (Based on 12 reports):
• Average Price Target: $245 (12-month upside: 18%)
• Recommendation Breakdown: Buy (67%), Hold (25%), Sell (8%)
• Key Investment Themes:
- AI integration driving services revenue growth
- China market recovery and new product cycles
- Margin expansion through cost efficiencies
• Risk Factors: Supply chain disruptions, regulatory scrutiny, competition

3. News Aggregation and Impact Analysis

Input: 20+ financial news articles about a company or sector
Prompt: "Summarize market-moving news and assess overall sentiment impact"

Example Output:

Market News Summary - Tech Sector (October 27, 2025):
• **Major Developments:**
- Meta announces breakthrough in AI chip technology (+5% stock reaction)
- Google Cloud signs $5B enterprise contract with Fortune 500 company
- Apple delays AR/VR headset launch due to supply chain issues (-2% impact)

• **Overall Sentiment:** Neutral to Positive
• **Market Impact:** Tech ETF up 1.2%, driven by AI and cloud announcements
• **Key Takeaway:** Innovation momentum outweighs short-term execution challenges

4. SEC Filing Analysis

Input: 100+ page 10-K or 10-Q filing
Prompt: "Extract and summarize key financial metrics, business developments, and risk factors"

Example Output:

10-K Key Highlights - Johnson & Johnson:

**Financial Performance:**
- Total Revenue: $87.6B (+3.2% YoY)
- Pharmaceutical: $52.1B (+8.1%), Medical Devices: $27.8B (-1.2%)
- Net Earnings: $4.2B, EPS: $6.84

**Business Developments:**
- Completed $2.1B acquisition of specialized biotech company
- Launched 3 new pharmaceutical products generating $800M in revenue
- Expanded digital health platform to 15M users

**Risk Factors:**
- Generic competition pressure on key products
- Regulatory scrutiny on pricing practices
- Supply chain vulnerabilities in medical devices
- Healthcare reform uncertainty

5. Earnings Call Transcript Analysis

Input: 2-hour earnings call transcript (50,000 words)
Prompt: "Summarize management commentary on strategy, outlook, and key initiatives"

Example Output:

Management Commentary Summary:

**Strategic Priorities:**
- Accelerate digital transformation across all business units
- Expand into adjacent markets through strategic acquisitions
- Invest $2B in R&D for next-generation technologies

**Financial Outlook:**
- Q4 revenue guidance: $8.2-8.5B (consensus $8.1B)
- FY2026 revenue growth target: 12-15%
- Operating margin expansion to 25% by 2027

**Key Initiatives:**
- Launch AI-powered customer service platform by Q2 2026
- Complete 3 strategic acquisitions in healthcare sector
- Achieve carbon neutrality by 2030 across global operations

6. Research Report Condensation

Input: Comprehensive industry research report (200 pages)
Prompt: "Create executive summary highlighting market size, growth drivers, and competitive landscape"

Example Output:

Global FinTech Market Research Summary:

**Market Size & Growth:**
- 2025 Market Size: $150B
- CAGR 2025-2030: 18.5%
- Key Growth Drivers: Digital transformation, regulatory changes, consumer demand

**Market Segments:**
- Payments: $60B (40% of market)
- Lending: $35B (23%)
- Wealth Management: $25B (17%)
- Insurance Tech: $20B (13%)
- Other: $10B (7%)

**Competitive Landscape:**
- Top 5 players control 45% market share
- Consolidation trend with 25 major acquisitions in 2025
- Emerging players from Asia gaining market share

**Investment Opportunities:**
- AI-driven personalization platforms
- Cross-border payment solutions
- Regulatory technology (RegTech)
- Embedded finance solutions

7. Risk Assessment Report Summary

Input: Comprehensive risk assessment document
Prompt: "Summarize top risks, mitigation strategies, and overall risk posture"

Example Output:

Enterprise Risk Assessment Summary:

**Top Risks (High Impact):**
1. **Cybersecurity Threats:** Ransomware attacks on critical infrastructure
- Mitigation: Enhanced security protocols, regular penetration testing
- Status: Medium risk with active mitigation

2. **Supply Chain Disruptions:** Semiconductor shortage affecting production
- Mitigation: Diversified supplier network, inventory optimization
- Status: High risk, requires immediate attention

3. **Regulatory Changes:** New financial reporting standards implementation
- Mitigation: Dedicated compliance team, technology upgrades
- Status: Medium risk with defined action plan

**Overall Risk Posture:** Moderate
**Risk Trend:** Improving (down from High 6 months ago)
**Key Recommendations:** Accelerate digital transformation, enhance supplier diversification

How to Use AI Summarizer

Basic Setup

  1. Add AI Summarizer to your workflow canvas
  2. Input your content (text, reports, articles)
  3. Choose summary type based on your needs
  4. Set length and focus areas as needed
  5. Connect to downstream processing or output
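If you think of the worker's settings as a small parameter object, the setup steps above reduce to picking a few validated options. A minimal sketch of assembling such a configuration — the field names and ranges here mirror the options described in this post, but they are illustrative, not the worker's actual schema:

```python
# Illustrative sketch of a summarizer step configuration. The field names
# and allowed values mirror this post's options; the real worker schema
# may differ.
SUMMARY_TYPES = {"concise", "detailed", "bullet_points", "executive", "financial"}

def build_summarizer_config(summary_type="concise", max_words=200, focus_areas=None):
    """Validate and assemble a summarizer configuration dict."""
    if summary_type not in SUMMARY_TYPES:
        raise ValueError(f"unknown summary type: {summary_type}")
    if not 50 <= max_words <= 1000:  # documented length range
        raise ValueError("max_words must be between 50 and 1000")
    return {
        "summary_type": summary_type,
        "max_words": max_words,
        "focus_areas": focus_areas or [],
    }

config = build_summarizer_config("executive", 150, ["financial metrics", "risks"])
print(config["summary_type"])  # executive
```

The validation mirrors the documented 50-1000 word length control, so a misconfigured step fails fast instead of producing an unexpected summary.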

Advanced Configuration

Summary Types:

  • Concise: Brief overview for quick understanding
  • Detailed: Comprehensive analysis with context
  • Bullet Points: Structured format for easy scanning
  • Executive: Decision-maker focused summaries
  • Financial: Emphasis on financial metrics and implications

Optimization Tips:

  • Focus Areas: Specify "financial metrics, risks, outlook" for targeted summaries
  • Length Control: Use shorter summaries for alerts, longer ones for deep analysis
  • Content Type: Different approaches for news vs. reports vs. transcripts

Technical Implementation

  • AI Model: Optimized Llama 3.1 model for summarization tasks
  • Processing: Handles documents up to 50,000 words
  • Output Formats: Clean text summaries with metadata
  • Performance: Sub-second processing for typical documents

Real-World Success Stories

Investment Research Firm

A quantitative investment firm implemented AI Summarizer to process 200+ research reports daily, reducing analyst reading time by 75% while maintaining 98% accuracy in key insight extraction.

Financial News Aggregator

A financial news platform uses AI Summarizer to condense breaking news into 100-word summaries, enabling real-time distribution to 500,000+ subscribers.

Compliance Department

A major bank's compliance team employs AI Summarizer to review regulatory filings and risk reports, identifying critical issues 60% faster than manual review.

Asset Management Company

An asset manager uses AI Summarizer to analyze earnings presentations, extracting key investment insights that inform portfolio allocation decisions.

Integration Examples

Automated Report Pipeline

[Data Collection] → [AI Summarizer: "executive"] → [Email Distribution]

[Database Storage]

Real-time News Processing

[News Feed] → [AI Summarizer: "concise"] → [Trading Signal Engine]

[Sentiment Database]

Research Workflow

[Research Reports] → [AI Summarizer: "detailed"] → [Investment Committee]

[Portfolio Updates]

Future Enhancements

We're continuously improving AI Summarizer with:

  • Multi-language support for global content processing
  • Custom model training for domain-specific summarization
  • Batch processing capabilities for large document sets
  • Integration APIs for third-party content sources
  • Advanced analytics on summarization quality and insights

Getting Started

Ready to transform your content processing workflow?

  1. Access AI Summarizer in your ApudFlow workspace
  2. Start with sample content to understand summarization quality
  3. Experiment with different summary types for various use cases
  4. Integrate into existing workflows for enhanced productivity

Important Disclaimer: AI Summarizer is a tool for content condensation and summarization. The summaries generated by this tool should not be considered as professional financial, investment, or trading advice. All investment decisions should be made based on your own research, risk tolerance, and consultation with qualified financial professionals. Past performance does not guarantee future results. Use this tool at your own risk and responsibility.

AI Summarizer represents the next evolution in content processing, enabling professionals to stay informed without being overwhelmed. Whether you're analyzing financial reports, monitoring market news, or processing research documents, AI Summarizer provides the intelligent condensation you need to make faster, better decisions.

Questions about implementing AI Summarizer? Our support team is here to help you optimize your content processing workflows! 🚀📊

Introducing Wait for Workers - Workflow Synchronization Made Easy

· 3 min read
ApudFlow OS
Platform Updates

Workflow synchronization just got a whole lot easier with our new Wait for Workers worker! This powerful addition to the ApudFlow platform allows you to coordinate parallel workflow branches and ensure operations run in the correct order.

What is Wait for Workers?

The Wait for Workers worker monitors the execution status of other workers in your workflow and waits until all specified workers have completed their tasks. It's perfect for scenarios where you need to:

  • Synchronize parallel data processing branches
  • Wait for multiple API calls to complete
  • Coordinate dependent operations
  • Ensure data availability before proceeding

How It Works

Simply connect workers to your Wait for Workers node, and it will automatically detect and monitor all connected workers. No manual configuration needed!

Manual Mode

For advanced use cases, you can manually specify worker IDs to wait for specific workers that may not be directly connected.

Key Features

  • Automatic Detection: Intelligently detects connected workers from workflow topology
  • Real-time Monitoring: Periodically checks worker status in the database
  • Timeout Protection: Configurable timeout to prevent infinite waiting
  • Error Handling: Optional failure on any worker error
  • Parallel Coordination: Perfect for synchronizing multiple parallel branches

Configuration Parameters

Parameter       Type     Default  Description
worker_ids      array    []       Worker IDs to wait for (leave empty for auto-detection)
check_interval  number   1.0      Seconds between status checks
timeout         number   300.0    Maximum wait time in seconds (0 = no limit)
fail_on_error   boolean  false    Fail immediately if any worker encounters an error

Return Values

The worker returns a comprehensive status report:

{
  "completed": ["worker_id_1", "worker_id_2"],
  "failed": [],
  "timeout": false,
  "total_waited": 2.5,
  "auto_detected": true
}

Example Use Case

Imagine you have a workflow that:

  1. Fetches stock data from Yahoo Finance
  2. Simultaneously processes the data with an LLM for analysis
  3. Needs both results before generating a final report

With Wait for Workers, you can ensure the final report generation waits for both the data fetch AND the LLM analysis to complete.

Getting Started

  1. Add a "Wait for Workers" node to your workflow
  2. Connect your parallel workers to the wait node
  3. Configure timeout and error handling preferences
  4. Connect the wait node to your downstream processing

The worker will automatically detect and wait for all connected workers to complete!

Watch the Tutorial

For a visual guide on how to use the Wait for Workers worker, check out our tutorial video:

How to Use Wait for Workers

This new worker significantly simplifies workflow coordination and makes building complex, parallel processing pipelines much more intuitive. Try it out in your next workflow!

MongoDB Document Storage Now Available

· 5 min read

ApudFlow now supports MongoDB for document storage with built-in user isolation. Whether you're building user profiles, storing analytics data, or managing content, MongoDB workers provide flexible document storage with automatic security.

What is MongoDB?

MongoDB is a document database that stores data in flexible, JSON-like documents. Unlike traditional relational databases, MongoDB allows you to store complex, nested data structures without predefined schemas.

Key Features

Automatic User Isolation

Every document is automatically tagged with your user ID, ensuring complete data separation between users. No cross-tenant data leaks or security concerns.
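Conceptually, the isolation works by stamping an owner ID onto every write and folding it into every query filter. A simplified in-memory sketch of the idea — the real worker does this transparently against MongoDB:

```python
class IsolatedStore:
    """Toy illustration of per-user document isolation (not the worker's code)."""

    def __init__(self):
        self._docs = []

    def insert(self, user_id, document):
        # Every stored document is stamped with the owner's ID.
        self._docs.append({**document, "_user_id": user_id})

    def find(self, user_id, query):
        # Every query is implicitly scoped to the caller's own documents.
        scoped = {**query, "_user_id": user_id}
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in scoped.items())]

store = IsolatedStore()
store.insert("alice", {"theme": "dark"})
store.insert("bob", {"theme": "dark"})
print(len(store.find("alice", {"theme": "dark"})))  # 1 — bob's doc is invisible
```

Because the filter is injected server-side on every operation, a query can never accidentally omit the user scope — which is what rules out cross-tenant leaks.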

Two Connection Modes

  • Internal Mode: Managed connection with a single built-in collection. No setup required - start storing data instantly.
  • External Mode: Connect to your own MongoDB database and collections for full control.

Rich Query Capabilities

Perform complex queries using MongoDB's powerful query language. Filter by any field, use regex patterns, and aggregate data across documents.

What Can You Use It For?

User Data Management

Store user profiles, preferences, and settings. Perfect for applications that need to maintain user-specific data across sessions.

Example: Store user trading preferences
- operation: insert
- document: {"theme": "dark", "defaultTimeframe": "1H", "alertsEnabled": true}
Example: User notification settings
- operation: insert
- document: {
    "emailNotifications": true,
    "smsAlerts": false,
    "notificationTypes": ["price_alerts", "news", "portfolio_updates"],
    "quietHours": {"start": "22:00", "end": "08:00"}
  }

Analytics and Metrics

Collect and analyze usage data, performance metrics, or business intelligence data. MongoDB's flexible schema makes it easy to evolve your data structure as needs change.

Example: Track user interactions
- operation: insert
- document: {"action": "chart_view", "symbol": "AAPL", "timestamp": "2025-10-21T10:00:00Z"}
Example: Performance analytics
- operation: insert
- document: {
    "workflowId": "market_scanner",
    "executionTime": 2.3,
    "dataPoints": 15420,
    "success": true,
    "timestamp": "2025-10-21T10:00:00Z"
  }
Example: A/B test results
- operation: insert
- document: {
    "testId": "new_ui_layout",
    "variant": "A",
    "userId": "user123",
    "conversion": true,
    "timeSpent": 45
  }

Content Management

Store articles, blog posts, comments, or any structured content. The document model is perfect for content with varying fields and nested data.

Example: Save analysis reports
- operation: insert
- document: {
    "title": "Q4 Earnings Analysis",
    "content": "...",
    "tags": ["earnings", "analysis"],
    "metadata": {"wordCount": 1250, "readTime": 6}
  }
Example: Product catalog
- operation: insert
- document: {
    "name": "Premium Analytics Package",
    "description": "Advanced market analysis tools",
    "pricing": {"monthly": 49.99, "yearly": 499.99},
    "features": ["real-time data", "custom indicators", "alerts"],
    "category": "analytics"
  }

Workflow State Persistence

Save intermediate workflow results, cache expensive computations, or maintain state between workflow runs.

Example: Cache API responses
- operation: insert
- document: {"endpoint": "/api/market-data", "response": {...}, "cachedAt": "2025-10-21T10:00:00Z"}
Example: Workflow checkpoints
- operation: insert
- document: {
    "workflowId": "data_import_001",
    "step": "data_validation",
    "progress": 75,
    "lastProcessedId": 15420,
    "errors": []
  }

E-commerce Applications

Manage product inventories, customer orders, and shopping cart data with complex nested structures.

Example: Order management
- operation: insert
- document: {
    "orderId": "ORD-2025-001",
    "customerId": "user123",
    "items": [
      {"productId": "AAPL", "quantity": 100, "price": 150.25},
      {"productId": "GOOGL", "quantity": 50, "price": 2800.00}
    ],
    "total": 155025.00,
    "status": "pending"
  }

IoT and Sensor Data

Store time-series data from sensors, devices, or monitoring systems with flexible schemas.

Example: Sensor readings
- operation: insert
- document: {
    "deviceId": "sensor_001",
    "timestamp": "2025-10-21T10:00:00Z",
    "readings": {
      "temperature": 23.5,
      "humidity": 65.2,
      "pressure": 1013.25
    },
    "location": {"lat": 40.7128, "lng": -74.0060}
  }

Gaming and User Progress

Track player statistics, achievements, inventory, and game state with nested data structures.

Example: Player profile
- operation: insert
- document: {
    "playerId": "player123",
    "level": 25,
    "experience": 15420,
    "inventory": [
      {"item": "golden_pickaxe", "quantity": 1},
      {"item": "diamond_ore", "quantity": 50}
    ],
    "achievements": ["first_trade", "profit_master", "speed_trader"]
  }

Financial Markets & Trading

Store complex financial data, trading strategies, market analysis, and portfolio management information with flexible document structures.

Example: Trading strategy configuration
- operation: insert
- document: {
    "strategyName": "Momentum Crossover",
    "parameters": {
      "fastPeriod": 9,
      "slowPeriod": 21,
      "stopLoss": 0.02,
      "takeProfit": 0.05
    },
    "symbols": ["AAPL", "MSFT", "GOOGL"],
    "active": true,
    "performance": {"winRate": 0.68, "avgReturn": 0.034}
  }
Example: Portfolio holdings
- operation: insert
- document: {
    "portfolioId": "growth_portfolio",
    "holdings": [
      {"symbol": "AAPL", "shares": 150, "avgPrice": 145.20, "currentValue": 22530.00},
      {"symbol": "MSFT", "shares": 75, "avgPrice": 280.50, "currentValue": 21037.50}
    ],
    "totalValue": 43567.50,
    "lastUpdated": "2025-10-21T10:00:00Z"
  }
Example: Market analysis reports
- operation: insert
- document: {
    "symbol": "AAPL",
    "analysisType": "technical",
    "indicators": {
      "rsi": 65.4,
      "macd": {"signal": 2.15, "histogram": 0.85},
      "movingAverages": {"sma20": 152.30, "sma50": 148.75}
    },
    "recommendation": "BUY",
    "confidence": 0.82,
    "generatedAt": "2025-10-21T10:00:00Z"
  }
Example: Trade execution log
- operation: insert
- document: {
    "tradeId": "T20251021001",
    "symbol": "AAPL",
    "side": "BUY",
    "quantity": 100,
    "price": 150.25,
    "timestamp": "2025-10-21T10:00:00Z",
    "strategy": "momentum_crossover",
    "fees": 1.50,
    "executionDetails": {
      "venue": "NASDAQ",
      "orderType": "MARKET",
      "slippage": 0.05
    }
  }
Example: Risk management data
- operation: insert
- document: {
    "riskProfile": "moderate",
    "positionLimits": {
      "maxSinglePosition": 0.1,
      "maxSectorExposure": 0.25,
      "maxDrawdown": 0.15
    },
    "varCalculation": {
      "confidence": 0.95,
      "timeHorizon": 1,
      "value": 12500.00
    },
    "lastAssessment": "2025-10-21T10:00:00Z"
  }
Example: Economic indicators database
- operation: insert
- document: {
    "indicator": "GDP_Growth",
    "country": "US",
    "frequency": "quarterly",
    "values": [
      {"date": "2025-Q2", "value": 2.1, "forecast": 2.3},
      {"date": "2025-Q3", "value": null, "forecast": 2.0}
    ],
    "source": "Bureau of Economic Analysis",
    "lastUpdated": "2025-10-21T10:00:00Z"
  }

Getting Started

  1. Add a "MongoDB" worker to your flow
  2. Choose connection mode (internal for quick start, external for custom setup)
  3. Select operation (insert, find, update, delete, count)
  4. Configure your document or query parameters

Example Workflow: User Portfolio Tracker

Create a workflow that:

  1. Fetches user portfolio data from an API
  2. Stores it in MongoDB with user isolation
  3. Updates analytics counters
  4. Sends notifications based on stored preferences

This combination of document storage with workflow automation opens up endless possibilities for data-driven applications.

Performance & Scalability

MongoDB excels at handling large volumes of data with fast read/write operations. The document model scales naturally as your data structure evolves, making it perfect for growing applications.

Start building with MongoDB today and unlock the power of flexible document storage in your workflows!

Redis Key-Value Storage Now Available

· 4 min read

ApudFlow now includes Redis support for high-performance key-value storage. Perfect for caching, session management, and real-time data operations that require speed and reliability.

What is Redis?

Redis is an in-memory data structure store used as a database, cache, and message broker. It provides sub-millisecond response times and supports various data structures like strings, hashes, lists, sets, and more.

Key Features

Lightning Fast Performance

Redis stores data in memory, delivering sub-millisecond response times. Perfect for applications requiring instant data access.

Rich Data Structures

Beyond simple key-value pairs, Redis supports:

  • Strings (text, numbers, binary data)
  • Hashes (field-value objects)
  • Lists (ordered collections)
  • Sets (unique value collections)
  • Sorted sets (ranked collections)

Automatic Expiration

Set TTL (time-to-live) on keys for automatic cleanup. Perfect for temporary data like sessions, cache entries, and rate limiting.

User Isolation

All keys are automatically prefixed with your user ID, ensuring complete data separation and security.

What Can You Use It For?

Caching API Responses

Speed up your workflows by caching expensive API calls, database queries, or computational results.

Example: Cache market data
- operation: set
- key: "market_data_AAPL"
- value: {"price": 150.25, "volume": 4523000}
- ttl: 300 (5 minutes)

Session Management

Store user session data, preferences, and temporary state with automatic expiration.

Example: User session storage
- operation: set
- key: "session_12345"
- value: {"userId": 12345, "lastActivity": "2025-10-21T10:00:00Z"}
- ttl: 3600 (1 hour)

Rate Limiting

Implement rate limiting for API calls, user actions, or workflow executions.

Example: API rate limiting
- operation: incr
- key: "api_calls_user_12345"
- value: 1
- Then check if value exceeds your limit
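The incr pattern above is the classic fixed-window rate limiter: atomically increment a per-user counter, let the key expire with the window, and reject once the count exceeds the limit. A sketch with an in-memory stand-in for Redis INCR + EXPIRE:

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter sketch (in-memory stand-in for Redis INCR + EXPIRE)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._counters = {}  # key -> (count, window_expiry)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        count, expiry = self._counters.get(key, (0, now + self.window))
        if now >= expiry:  # window elapsed: Redis would have expired the key
            count, expiry = 0, now + self.window
        count += 1  # maps to INCR; Redis makes this atomic across clients
        self._counters[key] = (count, expiry)
        return count <= self.limit

limiter = FixedWindowLimiter(limit=3, window_seconds=60)
print([limiter.allow("api_calls_user_12345", now=0) for _ in range(4)])
# [True, True, True, False]
```

With real Redis, INCR's atomicity means this works correctly even when many workflow runs hit the same counter concurrently — the in-memory version only illustrates the logic.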

Real-Time Analytics

Track counters, metrics, and statistics with atomic operations.

Example: Page view tracking
- operation: incr
- key: "page_views_dashboard"
- value: 1

Leaderboards and Rankings

Use sorted sets for gaming leaderboards, popularity rankings, or priority queues.

Example: Trading volume leaderboard
- operation: set (with sorted set operations)
- key: "trading_volume"
- Add scores for different symbols

Message Queues

Implement simple message queues for background processing or event handling.

Example: Notification queue
- operation: set (using lists)
- key: "notification_queue"
- Push messages for later processing

Getting Started

  1. Add a "Redis" worker to your flow
  2. Choose operation (set, get, delete, list, exists, incr, expire)
  3. Configure key, value, and optional parameters like TTL

Example Workflow: Smart Caching System

Create a workflow that:

  1. Checks Redis cache for data
  2. If cache miss, fetches from external API
  3. Stores result in Redis with TTL
  4. Returns cached or fresh data

This pattern dramatically improves performance and reduces API costs.
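The four steps above are the cache-aside pattern; a sketch of the logic, with a dict of expiry timestamps standing in for Redis SET/GET with TTL:

```python
import time

cache = {}  # key -> (value, expiry); in-memory stand-in for Redis with TTL

def get_with_cache(key, fetch, ttl=300, now=None):
    """Cache-aside: return the cached value if fresh, else fetch and store."""
    now = time.monotonic() if now is None else now
    if key in cache:
        value, expiry = cache[key]
        if now < expiry:
            return value, "hit"   # fresh entry: skip the expensive call
    value = fetch()               # cache miss: call the expensive source
    cache[key] = (value, now + ttl)
    return value, "miss"

calls = []
fetch = lambda: calls.append(1) or {"price": 150.25}
v1, s1 = get_with_cache("market_data_AAPL", fetch, ttl=300, now=0)
v2, s2 = get_with_cache("market_data_AAPL", fetch, ttl=300, now=10)
print(s1, s2, len(calls))  # miss hit 1
```

The second lookup within the TTL never touches the external API, which is exactly where the performance and cost savings come from.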

Performance Benefits

Redis typically delivers:

  • Up to 100x faster than disk-based databases for simple operations
  • Sub-millisecond latency for most operations
  • High throughput (100k+ operations per second)
  • Atomic operations ensuring data consistency

Use Cases by Industry

Finance & Trading

  • Real-time price caching
  • Portfolio position storage
  • Rate limiting for trading APIs
  • Session management for trading platforms
  • Order book management
  • Trade execution tracking
  • Risk limit monitoring
  • Market data snapshots
  • Alert system state
  • Trading strategy parameters
  • Historical price buffers
  • Account balance caching
  • Margin requirement tracking
  • Stop-loss order management
  • Arbitrage opportunity detection

Algorithmic Trading Excellence
Redis excels in algorithmic trading due to its lightning-fast access speeds and atomic operations. High-frequency trading strategies require microsecond response times for market data, order execution, and risk management. Redis enables real-time strategy execution, instant position updates, and rapid arbitrage detection across multiple exchanges. Its pub/sub capabilities allow for real-time signal broadcasting to multiple trading algorithms, while sorted sets efficiently manage order books and priority queues for trade execution.

In algorithmic trading, every millisecond counts. Redis's in-memory storage eliminates disk I/O bottlenecks, ensuring strategies can react instantly to market movements. Complex algorithms can maintain state across multiple timeframes, track position sizes with atomic increments, and implement sophisticated risk controls. The ability to expire keys automatically handles time-sensitive data like quotes and orders, while Redis clustering provides the scalability needed for high-volume trading operations. Whether you're running statistical arbitrage, market making, or momentum strategies, Redis provides the performance foundation that separates profitable algorithms from those that lag behind.

E-commerce

  • Shopping cart storage
  • Product catalog caching
  • User session management
  • Inventory counters

Gaming

  • Player statistics
  • Leaderboards
  • Session storage
  • Achievement tracking

Analytics

  • Event counting
  • User behavior tracking
  • A/B test results
  • Real-time dashboards

Redis brings enterprise-grade performance to your workflows. Start using it today for faster, more reliable data operations!

Get your Telegram Chat ID in Apudflow (2 minutes)

· 2 min read

Finding your Telegram Chat ID takes less than 2 minutes. Here's how to do it end-to-end inside Apudflow.

Prerequisites

  • Telegram app installed (mobile or desktop)
  • Our bot available in Telegram: apudflow_bot
  • Apudflow account (to paste the ID into the form)

Step-by-step

  1. Open Telegram, search for “apudflow_bot”.
  2. Open the chat and tap Start (/start).
    • The bot immediately replies with your Chat ID.
  3. Copy the Chat ID the bot returned.
  4. In Apudflow, open the flow step with the “Telegram Notify” worker and paste the number into the “Chat ID” field.
  5. Click run/test — you should receive a message in Telegram.
  • Group chat IDs

    • Add the bot to the group. If you have the webhook enabled, send /id in the group. The bot will reply with the group chat_id (usually a negative number).
  • Channel IDs

    • Make the bot an Administrator of the channel first (required).
    • Public channel: you can use the @channel_username as Chat ID.
    • Private channel: use the numeric id starting with -100… (get it by sending /id after the webhook is active, or with a utility bot like @getidsbot).
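The different ID shapes can be told apart programmatically. A small helper based on the conventions above (positive for private chats, negative for groups, a -100 prefix for channels and supergroups) — a heuristic, not an official Telegram API:

```python
def classify_chat_id(chat_id):
    """Classify a Telegram chat ID by its shape (heuristic, per the rules above)."""
    s = str(chat_id)
    if s.startswith("@"):
        return "public channel or group username"
    if s.startswith("-100"):
        return "channel or supergroup"
    if s.startswith("-"):
        return "group"
    return "private chat"

print(classify_chat_id(123456789))         # private chat
print(classify_chat_id(-987654321))        # group
print(classify_chat_id("-1001234567890"))  # channel or supergroup
print(classify_chat_id("@mychannel"))      # public channel or group username
```

A check like this is handy for validating the Chat ID field before a flow runs, so a misplaced username or truncated number fails early.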

That's it! Your Chat ID is now saved in your flow, and you can receive alerts from your automations.

Send Telegram notifications from Apudflow

· 5 min read

You can send messages and images to Telegram directly from your flows. This guide covers setup, formatting, and common pitfalls.

What you can send

  • Text messages
  • Image by URL (photoUrl): text becomes the photo caption
  • Markdown formatting (always on)

Configure the worker

In your flow, add the step “Telegram Notify” and fill these fields:

  • Chat ID: The numeric ID you got from the bot (see the Get Chat ID post) or @channel_username for public channels
  • Text: Message content (or caption when photoUrl is provided)
  • Photo URL (optional): A direct URL to an image
  • Disable link preview: Turn off webpage previews in text messages

Quick example

Result: You’ll receive a photo with the text as its caption.
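If you want to reproduce the worker's behavior outside Apudflow, the same fields map onto Telegram's Bot API sendMessage and sendPhoto methods. A sketch that only builds the request payload (no network call; you would POST it to https://api.telegram.org/bot<token>/<method>):

```python
def build_telegram_payload(chat_id, text, photo_url=None, disable_preview=False):
    """Build a Telegram Bot API payload mirroring the worker's fields."""
    if photo_url:
        # sendPhoto: the text travels as the photo caption
        return "sendPhoto", {"chat_id": chat_id, "photo": photo_url,
                             "caption": text, "parse_mode": "Markdown"}
    # sendMessage: plain text, optionally without the link preview
    return "sendMessage", {"chat_id": chat_id, "text": text,
                           "parse_mode": "Markdown",
                           "disable_web_page_preview": disable_preview}

method, payload = build_telegram_payload("@mychannel", "AAPL broke out",
                                         photo_url="https://example.com/chart.png")
print(method)              # sendPhoto
print(payload["caption"])  # AAPL broke out
```

Note the switch: as soon as photoUrl is set, the text stops being a message body and becomes the caption — the same behavior the worker gives you.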

Financial market use cases and templates

Here are practical ways to use Telegram alerts for trading and portfolio workflows, plus ready-to-copy message ideas.

  • Moving average crossovers

    • Template: ALERT: {TICKER} crossed above 200D MA at ${price} (RVOL {rvol}x)
    • Example: "ALERT: NVDA crossed above 200D MA at $142.50 (RVOL 1.8x)"
  • Breakout signals

    • Template: Breakout: {TICKER} closed above {level} on {tf} with {rvol}x RVOL
    • Example: "Breakout: AAPL closed above $225 on 4H with 2.1x RVOL"
  • RSI divergence signals

    • Template: Signal: {TICKER} RSI {rsi} ({side}) on {tf}; MACD {macd_state}
    • Example: "Signal: NVDA RSI 72 (overbought) on 1H; MACD bullish"
  • Bollinger Band squeeze/expansion

    • Template: BB Squeeze: {TICKER} bandwidth {bw}% — watch for expansion
    • Example: "BB Squeeze: ETH bandwidth 4.2% — watch for expansion"
  • Volume spikes / Relative Volume

    • Template: RVOL spike: {TICKER} {rvol}x vs 20D avg; {volume} shares traded
    • Example: "RVOL spike: AMD 2.7x vs 20D avg; 38M shares traded"
  • Earnings and macro calendar

    • Template: Today: {TICKER} earnings {time} — Cons: Rev {rev_c}, EPS {eps_c}
    • Example: "Today: MSFT earnings 21:30 CET — Cons: Rev $64.5B, EPS $2.71"
  • Portfolio and risk alerts

    • Template: PnL: D {pnl_d}% | MTD {pnl_mtd}% | DD {dd}% | VaR {var}%
    • Example: "PnL: D -1.2% | MTD +3.4% | DD 5.4% | VaR 1.1%"
  • Stop-loss / Take-profit hits

    • Template: Exit: {TICKER} {side} at ${price} — {reason} (SL/TP)
    • Example: "Exit: TSLA long at $239.50 — SL hit"
  • Unusual options activity / IV changes

    • Template: UOA: {TICKER} {flow} — IVR {ivr}, IV {iv}%
    • Example: "UOA: SPY call flow — IVR 78, IV 23%"
  • FX levels and macro indicators

    • Template: FX: {PAIR} tap {level} | {indicator} {value}
    • Example: "FX: EURUSD tap 1.1000 | PMI 52.3"
  • Crypto funding/liquidations/on-chain

    • Template: Crypto: {COIN} funding {funding}% | Liq {liq_usd} | NVT {nvt}
    • Example: "Crypto: BTC funding 0.03% | Liq $120M | NVT 48"
  • Spread/arb setups

    • Template: Spread: {A}/{B} z-score {z} | {tf}
    • Example: "Spread: GLD/GDX z-score 2.1 | D1"
  • End-of-day summary

    • Template: EoD: Top± {top_plus}/{top_minus} | PnL {pnl_d}% | Positions {n}
    • Example: "EoD: Top± NVDA +4.1% / T -3.7% | PnL +0.9% | Positions 12"- Breakouts and S/R touches
    • Template: Breakout: {TICKER} closed above {level} on {tf} with {rvol}x RVOL
    • Example: "Breakout: BTC closed above 65,000 on 4H with 3.1x RVOL"
  • RSI/MACD signals and divergences

    • Template: Signal: {TICKER} RSI {rsi} ({side}) on {tf}; MACD {macd_state}
    • Example: "Signal: NVDA RSI 72 (overbought) on 1H; MACD bullish"
  • Bollinger Band squeeze/expansion

    • Template: BB Squeeze: {TICKER} bandwidth {bw}% — watch for expansion
    • Example: "BB Squeeze: ETH bandwidth 4.2% — watch for expansion"
  • Volume spikes / Relative Volume

    • Template: RVOL spike: {TICKER} {rvol}x vs 20D avg; {volume} shares traded
    • Example: "RVOL spike: AMD 2.7x vs 20D avg; 38M shares traded"
  • Earnings and macro calendar

    • Template: Today: {TICKER} earnings {time} — Cons: Rev {rev_c}, EPS {eps_c}
    • Example: "Today: MSFT earnings 21:30 CET — Cons: Rev $64.5B, EPS $2.71"
  • Portfolio and risk alerts

    • Template: PnL: D {pnl_d}% | MTD {pnl_mtd}% | DD {dd}% | VaR {var}%
    • Example: "PnL: D -1.2% | MTD +3.4% | DD 5.4% | VaR 1.1%"

Tip: Attach a chart image with photoUrl for context (e.g., a MultiChart snapshot). Your text becomes the caption.
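
The `{placeholder}` templates above map directly onto Python-style format strings; a minimal sketch of filling one (the field names come from the templates above, the signal values are illustrative):

```python
# Fill a notification template with signal values using str.format.
# Template and field names are taken from the examples above; the
# signal dict is illustrative sample data.
template = "Breakout: {TICKER} closed above {level} on {tf} with {rvol}x RVOL"

signal = {"TICKER": "BTC", "level": "65,000", "tf": "4H", "rvol": 3.1}

message = template.format(**signal)
print(message)
# Breakout: BTC closed above 65,000 on 4H with 3.1x RVOL
```

Any missing placeholder raises a `KeyError`, which is a cheap way to catch incomplete signal payloads before they reach your messaging channel.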

Pro tips

  • Store your Chat ID once and reuse it across flows.
  • For groups: Chat ID is negative. If you migrate the group or channel, the ID can change — run /id again.
  • You can send a quick heartbeat by wiring a scheduled flow that posts "ping" to your own DM.

Happy notifying!

MultiChart — Descriptions in Type Selector

· One min read · 2025-10-17

We revamped how you add layers in MultiChart:

  • A single custom dropdown where each option includes a title and a short description (what it draws and how it’s used).
  • Descriptions help you pick the right layer faster without hopping to the docs.

Together with the new top legend and tooltip, configuration and analysis are smoother.

MultiChart for Financial Markets — A Practical Guide

· 9 min read
ApudFlow OS
Platform Updates

This guide shows how to use MultiChart to analyze financial markets faster and with more confidence. It covers what MultiChart can render, proven layer recipes, practical workflows, and why a single, layered view is a real edge.

What is MultiChart?

MultiChart combines multiple visualization layers on a single time axis. You can stack price, indicators, alternative chart types, volume, heatmaps, and order‑flow insights in one view with interactive legend, tooltip, and last‑value chips.

Supported layers (at a glance)

  • Price: Candle, Heikin Ashi, Bar, Equivolume
  • Lines & areas: Line, Tick, Area, Mountain (gradient)
  • Brick‑based: Renko, Range Bar, Point & Figure, Kagi
  • Indicators: Bollinger Bands, Ichimoku (Tenkan/Kijun/Span A/B cloud/Chikou)
  • Volume & order flow: Volume (bottom band), Footprint (candle + buy/sell levels)
  • Events/metrics: Bubble (size/color encodings), Heatmap (time × Y‑category in bottom band)

Tooltip shows context‑aware values (e.g., OHLC, Bollinger M/U/L, Ichimoku T/K/A/B/Ch, Volume). Legend above the chart lets you toggle layers instantly; last‑value chips on the right give a quick read of the latest levels.

Why this is a real advantage

  • Single timeline context: No more tab hopping across tools; compare signals at the exact same moment.
  • Faster synthesis: Tooltip consolidates values across all visible layers under your cursor.
  • Fewer blind spots: Mix complementary views (trend + volatility + volume + structure) to reduce false positives.
  • Multi‑timeframe ready: Layer slow trend context with faster execution signals on the same canvas.
  • Lower cognitive load: Top legend toggles, descriptive layer picker, and last‑value chips keep focus on the price action.
  • Robust data path: Backend normalizes inputs (safe parsing, append mode, rowsExpr), so display stays resilient.

Quick start (3–5 minutes)

  1. Add a MultiChart widget.
  2. Click “Add layer” and select a type (each option has a short description).
  3. Provide layer data (see formats below) and hit Apply.
  4. Repeat for 2–4 layers; keep it lean for clarity.
  5. Use the legend to toggle layers; hover for the tooltip; glance right for last values.

Tip: For large inputs, set a Global limit for smooth performance.

Proven recipes for markets

  • Trend following (clean + context)

    • Candle + Ichimoku (Tenkan/Kijun + cloud) + Volume
    • Read cloud bias and Tenkan/Kijun crosses; confirm with volume expansions.
  • Mean reversion / volatility bands

    • Candle + Bollinger Bands (+ Line for mid)
    • Fade extremes back to middle in ranges; avoid when trend/cloud bias is strong.
  • Noise reduction for structure

    • Renko or Range Bar + Kagi
    • Reveal swing structure and reversals; helpful for stop placement and breakouts.
  • Order flow confidence

    • Candle + Footprint + Volume
    • Watch footprint imbalance near prior highs/lows; confirm with volume spikes on breaks or rejections.
  • Event overlays

    • Candle + Heatmap (categories: events/signals/flows)
    • Correlate price response to discrete events without leaving the chart.
  • Momentum with confirmation

    • Heikin Ashi + Mountain (momentum line) + Bollinger mid
    • Smooth candles, slope + pullback to mid as continuation entries.

Multi‑timeframe layering

  • Use slower layer(s) for bias (e.g., Ichimoku, Mountain/Area of higher timeframe feed).
  • Add a faster execution layer (Tick/Line) to refine entries around key levels.
  • Toggle slow layers off temporarily when managing precise execution.

Interactions that speed you up

  • Top legend: toggle layers instantly, no menus.
  • Tooltip: vertical guide + consolidated lines per layer under your cursor.
  • Last‑value chips: right‑side labels with connectors; see the latest level per layer at a glance.

Data formats (summary)

  • Candle/Heikin/Bar/Equivolume: { ts, o, h, l, c, (volume) }
  • Line/Tick/Area/Mountain: { ts, value }
  • Renko/Range Bar/Point & Figure: { ts, o, c, up }
  • Bollinger: { ts, middle, upper, lower }
  • Kagi: { ts, value, thick, up }
  • Volume: { ts, volume }
  • Footprint: { ts, o, h, l, c, (volume), (levels:[{ price, buy, sell }]) }
  • Bubble: { ts, value, size, (colorValue) }
  • Ichimoku: { ts, value } for each of: tenkan/kijun/spanA/spanB/chikou
  • Heatmap: { ts, y, value } (Y = category label)

Notes:

  • ts in seconds (Unix). Heatmap/Volume draw in bottom bands and don’t affect price scale.
  • Ichimoku lines are considered in price range calculation; Heatmap is excluded.
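
The shapes above are plain lists of dicts, so reshaping an arbitrary feed is a one-liner per layer; a quick sketch of turning one raw OHLC feed (the raw field names here are hypothetical) into Candle and Volume rows:

```python
# Reshape one raw OHLC feed into the Candle and Volume row shapes above.
# The raw field names (t, open, high, low, close, vol) are hypothetical;
# substitute whatever your data source actually emits.
raw = [
    {"t": 1700000000, "open": 100.0, "high": 102.0, "low": 99.5, "close": 101.2, "vol": 35000},
    {"t": 1700003600, "open": 101.2, "high": 103.0, "low": 100.8, "close": 102.6, "vol": 41000},
]

candle_rows = [
    {"ts": r["t"], "o": r["open"], "h": r["high"], "l": r["low"], "c": r["close"]}
    for r in raw
]
volume_rows = [{"ts": r["t"], "volume": r["vol"]} for r in raw]
```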

Performance tips

  • Keep 3–6 layers visible; hide diagnostic layers via legend as needed.
  • Use Global limit or pre‑downsample on the backend for very large histories.
  • Prefer Area/Mountain/Line over highly granular Tick when you only need trend context.
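
Pre-downsampling on the backend can be as simple as aggregating rows into coarser time buckets; a minimal sketch, assumed to run in an upstream step before the chart (not a built-in worker):

```python
# Aggregate OHLC rows into coarser buckets (e.g., 15m -> 1h) before
# passing them to MultiChart. bucket_s is the target bucket in seconds.
def downsample_ohlc(rows, bucket_s):
    buckets = {}
    for r in rows:
        key = r["ts"] - r["ts"] % bucket_s  # align ts down to bucket start
        b = buckets.get(key)
        if b is None:
            buckets[key] = {"ts": key, "o": r["o"], "h": r["h"], "l": r["l"], "c": r["c"]}
        else:
            b["h"] = max(b["h"], r["h"])
            b["l"] = min(b["l"], r["l"])
            b["c"] = r["c"]  # last close in the bucket wins
    return [buckets[k] for k in sorted(buckets)]
```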

Risk & usage notes

  • This is an analysis tool, not investment advice.
  • Validate strategies out‑of‑sample; combine price, volume, and structure for robustness.
  • Be mindful of liquidity and execution costs when translating signals to orders.

Next steps

  • Add Y‑axis labels for Heatmap categories (coming).
  • Per‑component toggles for Ichimoku in the configurator.
  • Persist layer visibility state across sessions.

With MultiChart, you can compress the analysis loop: one chart, multiple truths. Less context switching, more time spent on decisions that matter.

Examples: market playbooks

Below are practical, step‑by‑step setups you can reproduce quickly.

1) Intraday breakout with volume confirmation

  • Layers: Candle + Volume + Footprint + Heatmap (events/news/sessions)
  • Read:
    • Mark prior session high/low or pre‑market extremes (Heatmap category = level).
    • Watch for price consolidation under the level; then break with a volume spike (Volume) and buy‑side imbalance on Footprint.
    • Tooltip confirms the candle OHLC and footprint context exactly at break.
  • Execution:
    • Entry on break/close above level with footprint buy > sell.
    • Invalidation if price re‑enters range and footprint flips (sell imbalance) on pullback.
    • Take partials into the next HTF level; trail using prior 1–2 candle lows.

2) Swing trend following with Ichimoku

  • Layers: Heikin Ashi (or Candle) + Ichimoku + Volume
  • Read:
    • Cloud color/position for bias; Tenkan crossing above Kijun for momentum.
    • Chikou above price confirms trend maturity; Volume upticks on legs.
  • Execution:
    • Enter on pullback to Kijun with Tenkan rising and cloud supportive.
    • Exit/trail when Tenkan crosses down or price closes inside/opposite the cloud.

3) Range mean reversion with Bollinger

  • Layers: Candle + Bollinger Bands (±2σ) + Volume (filter)
  • Read:
    • In sideways regimes, tests of Upper/Lower bands tend to revert to the Middle.
    • Avoid when Volume expands in the direction of the move (possible regime shift).
  • Execution:
    • Fade near band extremes back to the middle; stop outside the band; reduce when volume expands.

4) Multi‑timeframe overlay

  • Layers: Mountain (HTF feed; e.g., 4H) + Candle (LTF; e.g., 15m) + Bollinger mid
  • Read:
    • Use Mountain slope as higher‑timeframe bias; focus on LTF continuation in that direction.
    • Bollinger mid on LTF serves as a dynamic pullback reference.
  • Execution:
    • Enter on LTF pullback to the mid in the direction of HTF slope; exit on mid loss or slope flattening.

5) Structure & stops with Renko/Kagi

  • Layers: Renko (or Range Bar) + Kagi
  • Read:
    • Clean swing structure and reversal points; Kagi thickness highlights trend vigor.
  • Execution:
    • Place stops beyond the prior brick swing; enter on Kagi thickening in trend direction.

rowsExpr examples (Eval, no JSON)

Prefer passing expressions that evaluate to a list of dictionaries. You can reference two built‑in scopes: vars and data. If your platform supports mustache, you can wrap the expression, e.g. {{ ... }}.

Tip: Ensure timestamps are Unix seconds. If your source is in milliseconds, convert with r['ts'] // 1000 in a list comprehension.

Candle (from Yahoo feed in vars)

type: Candle
rowsExpr: "{{ vars['fetch_yahoo_n_v7is']['data'] }}"
open: "o"
close: "c"
high: "h"
low: "l"
timestamp: "ts"
limit: 500

If your ts is in milliseconds:

rowsExpr: "[{'ts': r['ts']//1000, 'o': r['o'], 'h': r['h'], 'l': r['l'], 'c': r['c']} for r in vars['fetch_yahoo_n_v7is']['data']]"

Line (close as a single value series)

type: Line
rowsExpr: "[{'ts': r['ts'], 'value': r['c']} for r in vars['fetch_yahoo_n_v7is']['data']]"
timestamp: "ts"
fields: "value"
shade: false

Volume (bottom band)

type: Volume
rowsExpr: "[{'ts': r['ts'], 'volume': r.get('volume') or r.get('v') or 0} for r in vars['fetch_yahoo_n_v7is']['data']]"
timestamp: "ts"

Ichimoku (computed server‑side from OHLC)

Use the same OHLC rows; the worker computes Tenkan/Kijun/Span A/B/Chikou.

type: Ichimoku
rowsExpr: "{{ vars['fetch_yahoo_n_v7is']['data'] }}"
open: "o"
close: "c"
high: "h"
low: "l"
timestamp: "ts"
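
For reference, the lines the worker computes follow the standard Ichimoku formulas; a sketch with the classic 9/26/52 settings (this is the textbook calculation, not the worker's actual code, and it ignores the forward/backward plot displacement):

```python
# Standard Ichimoku components from OHLC rows (classic 9/26/52 settings).
# Reference sketch of the textbook formulas, not the worker's implementation.
def donchian_mid(rows, i, n):
    # Midpoint of the highest high and lowest low over the last n bars.
    window = rows[max(0, i - n + 1): i + 1]
    return (max(r["h"] for r in window) + min(r["l"] for r in window)) / 2

def ichimoku(rows, tenkan_n=9, kijun_n=26, senkou_n=52):
    out = []
    for i, r in enumerate(rows):
        tenkan = donchian_mid(rows, i, tenkan_n)
        kijun = donchian_mid(rows, i, kijun_n)
        out.append({
            "ts": r["ts"],
            "tenkan": tenkan,
            "kijun": kijun,
            "spanA": (tenkan + kijun) / 2,             # plotted 26 bars forward
            "spanB": donchian_mid(rows, i, senkou_n),  # plotted 26 bars forward
            "chikou": r["c"],                          # plotted 26 bars back
        })
    return out
```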

Bollinger Bands (precomputed in vars)

If you already have bands in vars['bb_20_2'] with middle/upper/lower:

type: Bollinger
rowsExpr: "{{ vars['bb_20_2'] }}"
timestamp: "ts"
middle: "middle"
upper: "upper"
lower: "lower"

Or derive from closes on the fly (simple example, pseudo‑rolling):

rowsExpr: "(lambda rows: [
{'ts': rows[i]['ts'], 'middle': sum([r['c'] for r in rows[i-19:i+1]])/20,
'upper': (sum([r['c'] for r in rows[i-19:i+1]])/20) + 2*(((sum([(r['c']- (sum([q['c'] for q in rows[i-19:i+1]])/20))**2 for r in rows[i-19:i+1]])/20))**0.5),
'lower': (sum([r['c'] for r in rows[i-19:i+1]])/20) - 2*(((sum([(r['c']- (sum([q['c'] for q in rows[i-19:i+1]])/20))**2 for r in rows[i-19:i+1]])/20))**0.5)
} for i in range(19, len(rows))
])(vars['fetch_yahoo_n_v7is']['data'])"
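
The inline lambda above is hard to audit; if your flow lets you precompute rows in a Python step (a hypothetical upstream step), a readable equivalent with the same 20-period, 2σ, population-variance settings is:

```python
# Rolling 20-period Bollinger Bands (population std dev, 2 sigma),
# equivalent to the inline lambda above but readable and auditable.
def bollinger_rows(rows, n=20, k=2):
    out = []
    for i in range(n - 1, len(rows)):
        closes = [r["c"] for r in rows[i - n + 1: i + 1]]
        mid = sum(closes) / n
        std = (sum((c - mid) ** 2 for c in closes) / n) ** 0.5
        out.append({
            "ts": rows[i]["ts"],
            "middle": mid,
            "upper": mid + k * std,
            "lower": mid - k * std,
        })
    return out
```

Store the result in a var (e.g., the `vars['bb_20_2']` shown above) and reference it from rowsExpr directly.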

Heatmap (events → categories)

Map heterogeneous event arrays into a unified shape { ts, y, value }:

type: Heatmap
rowsExpr: "
([{'ts': e['ts'], 'y': 'earnings', 'value': 1} for e in vars.get('earnings', [])] +
[{'ts': n['ts'], 'y': 'news', 'value': 1} for n in vars.get('news', [])] +
[{'ts': s['ts'], 'y': 'session-open', 'value': 1} for s in vars.get('sessions', [])])
"
timestamp: "ts"
category: "y"
value: "value"

Footprint (order flow with price levels)

If your order‑flow provider is stored under vars['oflow']:

type: Footprint
rowsExpr: "{{ vars['oflow'] }}"
open: "o"
close: "c"
high: "h"
low: "l"
timestamp: "ts"
levels: "levels" # array of { price, buy, sell }

Notes

  • You can pass a Python list directly as rowsExpr (no mustache needed).
  • safe_eval sees: vars, data, and a small set of built‑ins (sum, len, min, max, sorted, list comprehensions, etc.).
  • If a layer expects OHLC, provide field names via mapping options (open/close/high/low/timestamp).

Checklist before trading

  • Data sanity: timestamps increasing, fields present (e.g., o/h/l/c, value, volume).
  • Visibility: keep 3–6 layers; hide diagnostics via legend on demand.
  • Regime: trend vs. range — pick the recipe that matches the regime.
  • Confirmation: volume or footprint to back price signals.
  • Risk: define invalidation (where the idea is clearly wrong), not just target.
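
The data-sanity item lends itself to automation; a minimal pre-flight check (field names from the formats above; the millisecond heuristic is an assumption, not a platform rule):

```python
# Pre-flight sanity check for a row series: required fields present,
# timestamps in Unix seconds (not ms), and strictly increasing.
def sanity_check(rows, required=("ts", "o", "h", "l", "c")):
    problems = []
    for i, r in enumerate(rows):
        for f in required:
            if f not in r:
                problems.append(f"row {i}: missing field '{f}'")
        # Unix-second timestamps today are ~1.7e9; anything above 1e11
        # is almost certainly milliseconds.
        if r.get("ts", 0) > 10**11:
            problems.append(f"row {i}: ts looks like milliseconds, expected seconds")
        if i > 0 and r.get("ts", 0) <= rows[i - 1].get("ts", 0):
            problems.append(f"row {i}: timestamps not strictly increasing")
    return problems
```

An empty list means the series is safe to chart; otherwise fix the feed before trusting any signal drawn from it.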

MultiChart — Top Legend and Tooltip

· One min read · 2025-10-14

Two UX improvements that make MultiChart easier to work with:

  • Legend moved above the chart — quick layer visibility toggles in a single row. The chart reserves top space so nothing overlaps.
  • Tooltip on hover — vertical guide line and a box with the time and values for all visible layers at that moment (OHLC, Bollinger M/U/L, Ichimoku T/K/A/B/Ch, Volume, etc.).

This speeds up analysis and cross‑layer comparisons.

MultiChart — Ichimoku and Heatmap

· One min read · 2025-10-12

We added two powerful visualizations to MultiChart:

  • Ichimoku: Tenkan, Kijun, Span A/B (with cloud), Chikou. Default colors and readable cloud opacity. Tooltip shows T/K/A/B/Ch at the hovered time.
  • Heatmap: A band at the bottom (time × Y category). Color from a palette (inferno by default; also viridis, magma, plasma). Does not affect the price scale.

We also polished backend computations (series normalization, safe parsing of values and timestamps).

Planned: Heatmap Y‑axis labels and Ichimoku component toggles in the configurator.

MultiChart — New Layer Types

· One min read · 2025-10-09

We’re shipping a bigger batch of new layer types for MultiChart. This lets you combine more analytical approaches on a single time axis.

What’s new:

  • Price: Heikin Ashi, Bar, Equivolume, Area, Mountain (Area variant with a soft gradient)
  • Advanced: Renko, Range Bar, Point & Figure, Kagi, Tick
  • Indicators: Bollinger Bands
  • Other: Bubble (radius from size, optional color from colorValue), Footprint (candle outline with per-level buy/sell)

Tips:

  • Start with 3–5 layers for clarity (e.g., Candle + Bollinger + Volume).
  • Use the global limit if you process larger datasets.

Docs: guides will be updated shortly; in the meantime, see the in‑app layer descriptions and on‑chart tooltips.