Architecture Overview

EventFlux is designed as a modular, high-performance streaming engine with a clear separation of concerns between parsing, planning, and execution.

System Architecture

┌───────────────────────────────────────────────────────────────────┐
│                         EventFlux Engine                          │
├───────────────────────────────────────────────────────────────────┤
│                                                                   │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │                     SQL Compiler Layer                      │  │
│  │  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │  │
│  │  │   Parser    │─▶│  Analyzer   │─▶│    Query Planner    │  │  │
│  │  │ (sqlparser) │  │ (Semantic)  │  │   (Optimization)    │  │  │
│  │  └─────────────┘  └─────────────┘  └─────────────────────┘  │  │
│  └─────────────────────────────────────────────────────────────┘  │
│                                 │                                 │
│                                 ▼                                 │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │                   Execution Runtime Layer                   │  │
│  │   ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐    │  │
│  │   │  Source  │─▶│ Processor│─▶│  Window  │─▶│   Sink   │    │  │
│  │   │ Handler  │  │  Chain   │  │ Manager  │  │ Handler  │    │  │
│  │   └──────────┘  └──────────┘  └──────────┘  └──────────┘    │  │
│  └─────────────────────────────────────────────────────────────┘  │
│                                 │                                 │
│                                 ▼                                 │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │                   State Management Layer                    │  │
│  │   ┌────────────┐  ┌─────────────┐  ┌───────────────────┐    │  │
│  │   │ StateHolder│  │ Checkpoint  │  │  Storage Backend  │    │  │
│  │   │  (Memory)  │  │   Manager   │  │   (Redis/Local)   │    │  │
│  │   └────────────┘  └─────────────┘  └───────────────────┘    │  │
│  └─────────────────────────────────────────────────────────────┘  │
│                                                                   │
└───────────────────────────────────────────────────────────────────┘

Core Components

1. SQL Compiler

The SQL compiler transforms EventFlux queries into an optimized execution plan.

Parser (sql_compiler)

  • Built on a vendored fork of datafusion-sqlparser-rs
  • Extended with EventFluxDialect for streaming constructs
  • Native support for WINDOW, MATCH, WITHIN clauses

Semantic Analyzer

  • Type checking and validation
  • Stream/table schema resolution
  • Expression type inference

Query Planner

  • Logical plan generation
  • Query optimization
  • Physical plan creation
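
Taken together, these three stages form a linear pipeline from query text to an executable plan. The sketch below illustrates that flow only; the type and function names (ParsedQuery, AnalyzedQuery, PhysicalPlan, compile) are placeholders invented for this example, not the engine's actual API.

// Illustrative stage boundaries of the compiler pipeline (placeholder types)
struct ParsedQuery {}    // AST produced by the parser
struct AnalyzedQuery {}  // AST annotated with resolved schemas and inferred types
struct PhysicalPlan {}   // operator tree handed to the execution runtime

#[derive(Debug)]
struct CompileError(String);

fn parse(sql: &str) -> Result<ParsedQuery, CompileError> {
    // Tokenize and parse the streaming SQL dialect into an AST
    if sql.trim().is_empty() {
        return Err(CompileError("empty query".into()));
    }
    Ok(ParsedQuery {})
}

fn analyze(ast: ParsedQuery) -> Result<AnalyzedQuery, CompileError> {
    // Resolve stream/table schemas and infer expression types
    let _ = ast;
    Ok(AnalyzedQuery {})
}

fn plan(query: AnalyzedQuery) -> Result<PhysicalPlan, CompileError> {
    // Build a logical plan, optimize it, and lower it to a physical plan
    let _ = query;
    Ok(PhysicalPlan {})
}

fn compile(sql: &str) -> Result<PhysicalPlan, CompileError> {
    // Parser -> Analyzer -> Query Planner, as in the diagram above
    plan(analyze(parse(sql)?)?)
}

fn main() {
    // The query text here is arbitrary; it only exercises the staged flow
    match compile("SELECT * FROM Orders") {
        Ok(_plan) => println!("query compiled"),
        Err(err) => println!("compilation failed: {err:?}"),
    }
}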

2. Query API

The Query API defines the Abstract Syntax Tree (AST) structures:

// Core query structures
pub struct Query {
    pub input: InputSource,
    pub selector: Selector,
    pub output: OutputTarget,
    pub window: Option<WindowSpec>,
    pub pattern: Option<PatternSpec>,
}

// Stream definition
pub struct StreamDefinition {
    pub name: String,
    pub attributes: Vec<Attribute>,
}

// Window specification
pub enum WindowSpec {
    Tumbling(Duration),
    Sliding(Duration, Duration),
    Session(Duration),
    Length(usize),
    // ... 9 window types
}

3. Execution Runtime

The execution runtime processes events through a pipeline of processors:

pub trait Processor: Send + Sync {
    /// Process an incoming event
    fn process(&mut self, event: StreamEvent) -> Vec<StreamEvent>;

    /// Process timer-based triggers
    fn process_timer(&mut self, timestamp: i64) -> Vec<StreamEvent>;

    /// Get current processor state
    fn get_state(&self) -> ProcessorState;
}
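
As an illustration of how a concrete operator could plug into this trait, here is a sketch of a WHERE-style filter. The StreamEvent and ProcessorState definitions are simplified stand-ins for the engine's real types, and the predicate-based constructor is an assumption made for the example, not EventFlux's actual FilterProcessor.

// Simplified stand-ins for the engine's event and state types (illustrative only)
pub struct StreamEvent {
    pub timestamp: i64,
    pub values: Vec<f64>,
}

pub struct ProcessorState;

// Trait as defined above, repeated so the sketch is self-contained
pub trait Processor: Send + Sync {
    fn process(&mut self, event: StreamEvent) -> Vec<StreamEvent>;
    fn process_timer(&mut self, timestamp: i64) -> Vec<StreamEvent>;
    fn get_state(&self) -> ProcessorState;
}

/// A WHERE-clause style filter: emits the event only if the predicate holds
pub struct FilterProcessor {
    predicate: Box<dyn Fn(&StreamEvent) -> bool + Send + Sync>,
}

impl FilterProcessor {
    pub fn new(predicate: impl Fn(&StreamEvent) -> bool + Send + Sync + 'static) -> Self {
        Self { predicate: Box::new(predicate) }
    }
}

impl Processor for FilterProcessor {
    fn process(&mut self, event: StreamEvent) -> Vec<StreamEvent> {
        if (self.predicate)(&event) { vec![event] } else { Vec::new() }
    }

    fn process_timer(&mut self, _timestamp: i64) -> Vec<StreamEvent> {
        Vec::new() // a stateless filter emits nothing on timer ticks
    }

    fn get_state(&self) -> ProcessorState {
        ProcessorState // no state to snapshot for a stateless filter
    }
}

fn main() {
    // Keep events whose first value exceeds a threshold (WHERE values[0] > 100.0)
    let mut filter = FilterProcessor::new(|e: &StreamEvent| {
        e.values.first().copied().unwrap_or(0.0) > 100.0
    });
    let event = StreamEvent { timestamp: 1, values: vec![250.0] };
    println!("emitted {} event(s)", filter.process(event).len());
}

A stateful operator such as a window or join would follow the same shape, but would additionally track state to return from get_state and emit results from process_timer.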

Processor Types:

Processor           Purpose
FilterProcessor     WHERE clause filtering
ProjectProcessor    SELECT expression evaluation
WindowProcessor     Window aggregation (9 types)
JoinProcessor       Stream/table joins
PatternProcessor    Complex event detection
GroupByProcessor    GROUP BY aggregation

High-Performance Pipeline

EventFlux achieves high throughput through several optimizations:

Lock-Free Data Structures

use crossbeam::queue::ArrayQueue;

pub struct EventPipeline {
    /// Lock-free bounded queue for event passing
    queue: ArrayQueue<StreamEvent>,
    /// Pipeline capacity
    capacity: usize,
}

impl EventPipeline {
    pub fn push(&self, event: StreamEvent) -> Result<(), PushError> {
        self.queue.push(event).map_err(|_| PushError::Full)
    }

    pub fn pop(&self) -> Option<StreamEvent> {
        self.queue.pop()
    }
}
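
To make the lock-free hand-off concrete, the following self-contained sketch shares a bounded ArrayQueue between a producer thread and a consumer thread. The Arc wrapper, the event struct, and the spin-retry on a full queue are details assumed for this example (a real pipeline would apply one of the backpressure strategies described below instead of spinning).

use std::sync::Arc;
use std::thread;

use crossbeam::queue::ArrayQueue;

struct StreamEvent {
    timestamp: i64,
}

fn main() {
    // Bounded, lock-free queue shared between producer and consumer
    let queue = Arc::new(ArrayQueue::<StreamEvent>::new(1024));

    let producer = {
        let queue = Arc::clone(&queue);
        thread::spawn(move || {
            for ts in 0..10_000i64 {
                let mut event = StreamEvent { timestamp: ts };
                // If the queue is full, spin until the consumer frees a slot
                while let Err(rejected) = queue.push(event) {
                    event = rejected;
                    std::hint::spin_loop();
                }
            }
        })
    };

    let consumer = {
        let queue = Arc::clone(&queue);
        thread::spawn(move || {
            let mut seen = 0u64;
            while seen < 10_000 {
                if let Some(_event) = queue.pop() {
                    seen += 1; // process the event here
                } else {
                    std::hint::spin_loop();
                }
            }
            seen
        })
    };

    producer.join().unwrap();
    println!("consumed {} events", consumer.join().unwrap());
}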

Object Pooling

Pre-allocated event objects minimize allocation overhead:

pub struct EventPool {
    pool: ArrayQueue<StreamEvent>,
    capacity: usize,
}

impl EventPool {
    pub fn acquire(&self) -> StreamEvent {
        self.pool.pop().unwrap_or_else(|| StreamEvent::new())
    }

    pub fn release(&self, event: StreamEvent) {
        // Return to pool if not full
        let _ = self.pool.push(event);
    }
}
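
A typical acquire/release cycle could look like the sketch below, which pre-fills the pool and clears an event's fields before returning it. The EventPool::new constructor and the reset step are assumptions made for illustration, not the engine's actual pool.

use crossbeam::queue::ArrayQueue;

#[derive(Default)]
pub struct StreamEvent {
    pub timestamp: i64,
    pub values: Vec<f64>,
}

pub struct EventPool {
    pool: ArrayQueue<StreamEvent>,
}

impl EventPool {
    /// Pre-fill the pool so the hot path never allocates (illustrative constructor)
    pub fn new(capacity: usize) -> Self {
        let pool = ArrayQueue::new(capacity);
        for _ in 0..capacity {
            let _ = pool.push(StreamEvent::default());
        }
        Self { pool }
    }

    pub fn acquire(&self) -> StreamEvent {
        self.pool.pop().unwrap_or_default()
    }

    pub fn release(&self, mut event: StreamEvent) {
        // Clear per-event data so the next user starts from a clean slate
        event.values.clear();
        event.timestamp = 0;
        let _ = self.pool.push(event); // dropped if the pool is already full
    }
}

fn main() {
    let pool = EventPool::new(4);

    let mut event = pool.acquire(); // no allocation: comes from the pre-filled pool
    event.timestamp = 42;
    event.values.push(3.5);
    println!("processing event ts={} with {} value(s)", event.timestamp, event.values.len());

    pool.release(event); // recycled for the next acquire
}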

Backpressure Handling

Configurable strategies for handling backpressure:

Strategy      Behavior                                Use Case
Block         Block producer until space available    Guaranteed delivery
Drop          Drop oldest events when full            Latest data priority
DropNewest    Reject new events when full             Historical data priority
Unbounded     Grow queue without limit                Testing only
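
The table reads as a policy applied at the enqueue site. The sketch below shows one way such a policy might be expressed around a bounded queue; the BackpressureStrategy enum and push_with_strategy function are illustrative, not EventFlux's configuration API. Unbounded is omitted because it amounts to swapping in an unbounded queue (for example crossbeam's SegQueue) rather than changing the push logic.

use crossbeam::queue::ArrayQueue;

pub enum BackpressureStrategy {
    Block,      // wait until the consumer frees a slot
    Drop,       // evict the oldest queued event to make room
    DropNewest, // reject the incoming event
}

/// Returns true if the item ended up in the queue
pub fn push_with_strategy<T>(
    queue: &ArrayQueue<T>,
    mut item: T,
    strategy: &BackpressureStrategy,
) -> bool {
    match strategy {
        BackpressureStrategy::Block => {
            // Guaranteed delivery: retry until space is available
            while let Err(rejected) = queue.push(item) {
                item = rejected;
                std::hint::spin_loop();
            }
            true
        }
        BackpressureStrategy::Drop => {
            // Latest-data priority: evict the oldest event, then enqueue
            loop {
                match queue.push(item) {
                    Ok(()) => return true,
                    Err(rejected) => {
                        item = rejected;
                        let _ = queue.pop(); // drop the oldest event
                    }
                }
            }
        }
        BackpressureStrategy::DropNewest => {
            // Historical-data priority: keep what is queued, reject the newcomer
            queue.push(item).is_ok()
        }
    }
}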

Window Processing

Windows are central to stream processing. EventFlux implements 9 window types:

pub enum WindowType {
    /// Fixed, non-overlapping time windows
    Tumbling(Duration),
    /// Overlapping time windows
    Sliding { size: Duration, slide: Duration },
    /// Gap-based session windows
    Session(Duration),
    /// Count-based sliding window
    Length(usize),
    /// Count-based batch window
    LengthBatch(usize),
    /// Continuous time window
    Time(Duration),
    /// Time-based batch window
    TimeBatch(Duration),
    /// Event-time based window
    ExternalTime { timestamp_attr: String, duration: Duration },
    /// Delayed emission window
    Delay(Duration),
}
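
As an example of how the simplest of these can be realized, a tumbling window can bucket events by truncating the timestamp to a multiple of the window size and flush a bucket once a later event moves past its end. The sketch below is a minimal illustration under that assumption, not the engine's WindowProcessor.

use std::collections::BTreeMap;

struct StreamEvent {
    timestamp_ms: i64,
    value: f64,
}

/// Minimal tumbling-window aggregator: fixed, non-overlapping buckets of `size_ms`
struct TumblingWindow {
    size_ms: i64,
    // window start -> events assigned to that window
    buckets: BTreeMap<i64, Vec<StreamEvent>>,
}

impl TumblingWindow {
    fn new(size_ms: i64) -> Self {
        Self { size_ms, buckets: BTreeMap::new() }
    }

    /// Assign the event to its window and emit every window that has closed
    fn process(&mut self, event: StreamEvent) -> Vec<(i64, Vec<StreamEvent>)> {
        // Window start = event timestamp truncated to a multiple of the window size
        let ts = event.timestamp_ms;
        let start = ts - ts.rem_euclid(self.size_ms);
        self.buckets.entry(start).or_default().push(event);

        // Close every bucket whose end lies at or before the newest timestamp
        let expired: Vec<i64> = self
            .buckets
            .keys()
            .copied()
            .take_while(|s| s + self.size_ms <= ts)
            .collect();

        let mut closed = Vec::new();
        for s in expired {
            if let Some(events) = self.buckets.remove(&s) {
                closed.push((s, events));
            }
        }
        closed
    }
}

fn main() {
    let mut window = TumblingWindow::new(1_000); // 1-second tumbling windows
    for (ts, v) in [(100, 1.0), (900, 2.0), (1_200, 3.0), (2_050, 4.0)] {
        let event = StreamEvent { timestamp_ms: ts, value: v };
        for (start, events) in window.process(event) {
            let sum: f64 = events.iter().map(|e| e.value).sum();
            println!("window [{start}, {}) closed with sum {sum}", start + 1_000);
        }
    }
}

Sliding, session, and batch variants differ mainly in how bucket boundaries are chosen and when buckets are flushed, not in this overall assign-then-flush shape.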

Pattern Matching Engine

The pattern matching engine detects complex event sequences:

pub struct PatternMatcher {
    /// Pattern definition
    pattern: Pattern,
    /// Active partial matches
    active_matches: Vec<PartialMatch>,
    /// Time constraint
    within: Duration,
}

impl PatternMatcher {
    pub fn process(&mut self, event: StreamEvent) -> Vec<PatternMatch> {
        // 1. Try to extend existing partial matches
        // 2. Start new partial matches if event matches first element
        // 3. Clean up expired partial matches
        // 4. Return completed matches
    }
}
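
To ground those four steps, the sketch below implements the simplest interesting case: an "A" event followed by a "B" event within a WITHIN-style time bound. The event and match types are simplified stand-ins, and a real matcher generalizes this to longer sequences, alternatives, and counting; none of the names below are the engine's actual API.

use std::time::Duration;

#[derive(Clone)]
struct StreamEvent {
    timestamp_ms: i64,
    kind: String,
}

struct PatternMatch {
    first: StreamEvent,
    second: StreamEvent,
}

/// Detects "an 'A' event followed by a 'B' event within `within`"
struct SequenceMatcher {
    within: Duration,
    /// Step 1 has matched; waiting for step 2 (these are the partial matches)
    pending: Vec<StreamEvent>,
}

impl SequenceMatcher {
    fn new(within: Duration) -> Self {
        Self { within, pending: Vec::new() }
    }

    fn process(&mut self, event: StreamEvent) -> Vec<PatternMatch> {
        let within_ms = self.within.as_millis() as i64;

        // 3. Clean up partial matches that can no longer complete in time
        self.pending
            .retain(|first| event.timestamp_ms - first.timestamp_ms <= within_ms);

        // 1. Try to extend existing partial matches with this event
        let mut completed = Vec::new();
        if event.kind == "B" {
            for first in self.pending.drain(..) {
                completed.push(PatternMatch { first, second: event.clone() });
            }
        }

        // 2. Start a new partial match if the event matches the first element
        if event.kind == "A" {
            self.pending.push(event);
        }

        // 4. Return completed matches
        completed
    }
}

fn main() {
    let mut matcher = SequenceMatcher::new(Duration::from_secs(10));
    let events = [(0, "A"), (4_000, "C"), (7_000, "B"), (25_000, "B")];
    for (ts, kind) in events {
        let event = StreamEvent { timestamp_ms: ts, kind: kind.to_string() };
        for m in matcher.process(event) {
            println!("matched A@{} -> B@{}", m.first.timestamp_ms, m.second.timestamp_ms);
        }
    }
}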

Memory Model

EventFlux uses Rust's ownership model to ensure memory safety without garbage collection:

  • Zero-copy: Events are passed by reference where possible
  • Arc sharing: Shared state uses Arc<T> for safe concurrent access (see the sketch after this list)
  • Explicit lifetimes: Compile-time memory safety guarantees
  • No GC pauses: Predictable latency without garbage collection
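
For instance, an event wrapped in Arc<T> can be handed to several downstream consumers without copying the payload; only the reference count changes, and the memory is freed deterministically when the last handle is dropped. This is a generic Rust illustration of the points above, not EventFlux-specific code.

use std::sync::Arc;
use std::thread;

struct StreamEvent {
    timestamp_ms: i64,
    payload: Vec<u8>,
}

fn main() {
    // One allocation for the event payload...
    let event = Arc::new(StreamEvent {
        timestamp_ms: 1_700_000_000_000,
        payload: vec![0u8; 4096],
    });

    // ...shared by reference with multiple downstream consumers: no copies, no GC
    let handles: Vec<_> = (0..3)
        .map(|worker| {
            let event = Arc::clone(&event);
            thread::spawn(move || {
                // Each worker reads the same buffer through its own Arc handle
                (worker, event.timestamp_ms, event.payload.len())
            })
        })
        .collect();

    for handle in handles {
        let (worker, ts, len) = handle.join().unwrap();
        println!("worker {worker} saw event ts={ts}, payload={len} bytes");
    }
    // The event is freed deterministically when the last Arc is dropped
}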

Performance Characteristics

Metric             Target            Status
Throughput         >1M events/sec    Target
Latency (p99)      <1ms              Target
Memory overhead    Efficient         Tested
GC pauses          None              Guaranteed

Next Steps