
Event Flow

This page explains how Anava processes camera events, from initial trigger through AI analysis to final actions. Understanding this flow helps you configure effective Detections and troubleshoot issues.

The Pipeline

Anava follows a consistent pipeline for every event:

[Diagram: Event processing pipeline from trigger to actions]

Stage 1: Trigger

Everything starts with a camera event that triggers analysis.

Trigger Types

| Type | Source | Use Case |
| --- | --- | --- |
| Motion | Camera's built-in motion detection | General-purpose, always available |
| Object Analytics | AXIS Object Analytics (AOA) | Pre-filtered by on-camera AI (person, vehicle, etc.) |
| Digital Input | Physical I/O port signal | Door sensors, PIR detectors, alarm panels |
| Manual Trigger | Virtual input from VMS or API | Operator-initiated analysis |
| Perimeter | AOA fence/tripwire scenarios | Line crossing, zone intrusion |
| Schedule | Time-based polling | Active monitoring without triggers |

How Triggers Work

  1. Camera's analytics detect an event (motion, object, I/O change)
  2. Camera sends ONVIF event notification to the Anava ACAP
  3. ACAP matches the event to active profiles
  4. If a profile matches, the ACAP captures a frame and sends it to the cloud for analysis

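Step 3 above, matching an incoming event against active profiles, can be sketched as a simple filter. This is a minimal illustration; `Profile`, its fields, and `matching_profiles` are assumed names, not the ACAP's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Illustrative stand-in for an Anava detection profile."""
    name: str
    trigger_types: set          # e.g. {"Motion", "ObjectAnalytics"}
    enabled: bool = True

def matching_profiles(profiles, event_type):
    """Return the active profiles whose trigger list includes this event type."""
    return [p for p in profiles if p.enabled and event_type in p.trigger_types]

profiles = [
    Profile("loading-dock", {"Motion"}),
    Profile("perimeter", {"ObjectAnalytics", "Perimeter"}),
    Profile("night-watch", {"Motion"}, enabled=False),   # disabled, never matches
]

hits = matching_profiles(profiles, "Motion")
```

Only enabled profiles that subscribe to the event's trigger type proceed to frame capture; everything else is dropped at the camera, at no cloud cost.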
Trigger Selection

Use Object Analytics when available - it pre-filters at the camera, reducing cloud costs and false positives. Fall back to Motion for cameras without AOAS.

Stage 2: Frame Capture

When a trigger fires, the ACAP captures a snapshot for analysis.

Capture Configuration

| Setting | Options | Impact |
| --- | --- | --- |
| View Area | 1-8 | Which camera stream to capture |
| Resolution | TINY to ULTRA | Image quality vs. bandwidth |

Resolution Profiles

| Profile | Resolution | Use Case |
| --- | --- | --- |
| TINY (128p) | 228x128 | Minimal bandwidth, basic detection |
| LOW (256p) | 455x256 | Low bandwidth, overview detection |
| BALANCED (360p) | 640x360 | Recommended for most use cases |
| HIGH (480p) | 854x480 | Detailed analysis needed |
| HD_720 (720p) | 1280x720 | High-detail detection |
| FULL_HD (1080p) | 1920x1080 | Maximum detail |
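To compare the bandwidth cost of the profiles above, it helps to look at total pixel counts. A small sketch (the dictionary simply restates the table; the names are illustrative, not ACAP configuration keys):

```python
# Resolution profiles from the table above, as (width, height) pairs.
RESOLUTION_PROFILES = {
    "TINY": (228, 128),
    "LOW": (455, 256),
    "BALANCED": (640, 360),
    "HIGH": (854, 480),
    "HD_720": (1280, 720),
    "FULL_HD": (1920, 1080),
}

def pixel_ratio(a: str, b: str) -> float:
    """How many times more pixels profile `a` carries than profile `b`."""
    wa, ha = RESOLUTION_PROFILES[a]
    wb, hb = RESOLUTION_PROFILES[b]
    return (wa * ha) / (wb * hb)
```

For example, FULL_HD carries nine times the pixels of BALANCED, which translates directly into larger uploads and longer analysis times, so step up only when the detection genuinely needs the detail.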

Stage 3: Pre-filter (Optional)

The pre-filter stage provides fast, low-cost screening before full analysis.

[Diagram: Pre-filter decision flow for event analysis]

Pre-filter Configuration

In your skill's Analysis Configuration:

  • Pre-filter Criteria: What makes an image worth analyzing
  • Pre-filter Prompt: Instructions for the fast check

Example pre-filter:

Criteria: "Human presence in frame"
Prompt: "Is there a person visible in this image? Answer yes or no."
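The gate itself is a simple yes/no decision on the fast check's reply. A minimal sketch, assuming the reply is free text and that an affirmative answer routes the frame onward (the parsing rule here is an assumption, not Anava's documented behavior):

```python
def prefilter_gate(answer: str) -> bool:
    """Interpret the fast check's reply; only an affirmative
    answer proceeds to full analysis."""
    return answer.strip().lower().startswith("yes")

# prefilter_gate("Yes, a person is visible.")  -> proceeds to full analysis
# prefilter_gate("No.")                        -> rejected, no full-analysis cost
```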

When to Use Pre-filter

| Scenario | Pre-filter? | Reason |
| --- | --- | --- |
| High-traffic area | Yes | Filter out irrelevant motion |
| Secure area (low traffic) | No | Every trigger is important |
| Motion trigger | Yes | Many false triggers |
| Object Analytics trigger | Maybe | Already pre-filtered by the camera |

Stage 4: Full Analysis

The full analysis stage uses Gemini's multimodal AI to understand the scene.

What Happens

  1. Frame and skill prompts are sent to Gemini
  2. AI analyzes the image based on your configuration
  3. Structured response is generated with:
    • Object detections (what's in the scene)
    • Question answers (structured data)
    • Confidence scores
    • Recommended actions
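A structured response of the kind described above might look like the following. The field names are illustrative, not the exact API schema:

```python
# Hypothetical shape of a full-analysis result.
analysis_result = {
    "detections": [
        {"object": "Person", "confidence": 0.94},
        {"object": "Vehicle", "confidence": 0.81},
    ],
    "answers": {"Is the area clear?": "no"},
    "recommended_actions": ["emit_onvif_event", "tts_warning"],
}

# e.g. keep only high-confidence detections before acting on them
confident = [d for d in analysis_result["detections"] if d["confidence"] >= 0.9]
```

Because the output is structured rather than free text, downstream stages (ONVIF events, TTS, notifications) can act on individual fields instead of parsing prose.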

Analysis Configuration

| Component | Purpose | Example |
| --- | --- | --- |
| System Prompt | Sets AI context and behavior | "You are a security analyst..." |
| User Prompt | Specific analysis instructions | "Identify any unauthorized persons..." |
| Objects | Items to detect | "Person", "Weapon", "Vehicle" |
| Questions | Structured outputs | "Is the area clear? (yes/no)" |
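Put together, the four components above form a single skill configuration. A sketch, with keys that are assumptions rather than the exact Anava schema:

```python
# Illustrative skill configuration; key names are assumptions.
skill_config = {
    "system_prompt": "You are a security analyst monitoring a loading dock.",
    "user_prompt": "Identify any unauthorized persons or vehicles.",
    "objects": ["Person", "Weapon", "Vehicle"],
    "questions": [{"text": "Is the area clear?", "type": "yes/no"}],
}
```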

Temporal Analysis

For Active Monitoring profiles, multiple frames are analyzed over time:

[Diagram: Temporal analysis sequence for multi-frame detections]

This enables detections like "person loitering" or "crowd forming".
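The idea behind a temporal detection like "person loitering" can be sketched as a sliding window over per-frame results. The window size, threshold, and result shape here are all illustrative assumptions:

```python
from collections import deque

def loitering(frame_results, window=5, min_hits=4):
    """Flag loitering when a person appears in most of the last
    `window` frame analyses (thresholds are illustrative)."""
    recent = deque(frame_results, maxlen=window)
    return sum(1 for f in recent if f.get("person")) >= min_hits

# Four of the last five frames contained a person -> loitering
frames = [{"person": True}] * 4 + [{"person": False}]
```

A single-frame detection cannot distinguish "person walking past" from "person lingering"; only the sequence can.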

Stage 5: Actions

Based on analysis results, Anava executes configured actions.

ONVIF Events

For VMS integration, Anava emits ONVIF events:

[Diagram: ONVIF event lifecycle from detection to VMS action]

| Event Type | Behavior | VMS Compatibility |
| --- | --- | --- |
| Boolean (Stateful) | True when detected, False when cleared | Milestone, Genetec, ACS |
| Pulse | Single trigger per detection | All VMS systems |
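The difference between the two behaviors comes down to how state changes are reported. A sketch of the logic (the payload dicts and function are illustrative, not the actual ONVIF message format):

```python
def events_to_emit(detected: bool, was_detected: bool, mode: str):
    """Sketch of the two event behaviors above: returns the
    (illustrative) payloads to send for this state change."""
    if mode == "pulse":
        # One trigger per new detection; nothing is sent on clear.
        return [{"state": "triggered"}] if detected and not was_detected else []
    # Boolean/stateful: True on detection, False when cleared.
    if detected != was_detected:
        return [{"state": detected}]
    return []
```

Stateful events let a VMS drive duration-based rules (e.g. record while True); pulse events suit systems that only react to discrete triggers.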

TTS (Text-to-Speech)

When configured, Anava can speak through the camera:

  • Automatic: Based on detection results
  • Custom Message: Dynamic response based on scene
  • Voice Options: Multiple voices and styles

Notifications

  • Push Notifications: Mobile alerts
  • Email: Detailed event reports
  • Webhooks: Custom integrations
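A webhook integration typically POSTs a JSON event summary to your endpoint. A minimal sketch of the sending side; the payload shape is hypothetical, not Anava's documented webhook schema:

```python
import json
import urllib.request

def build_webhook_request(url: str, event: dict) -> urllib.request.Request:
    """Build a JSON POST for a custom endpoint.
    The `event` payload shape is hypothetical."""
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually deliver it:
# urllib.request.urlopen(build_webhook_request(url, event), timeout=5)
```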

Session Lifecycle

Every analysis creates a Session in Anava:

[Diagram: Session lifecycle for event analysis]

Session Data

Each session contains:

  • Captured frame(s)
  • Analysis results
  • Objects detected
  • Question answers
  • ONVIF events emitted
  • TTS playback status
  • Timing information
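As a mental model, the session record listed above can be pictured as a simple data structure. The field names are assumptions for illustration, not Anava's exact schema:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Illustrative shape of an Anava session; field names are assumed."""
    frames: list = field(default_factory=list)        # captured frame(s)
    objects: list = field(default_factory=list)       # detections
    answers: dict = field(default_factory=dict)       # question answers
    onvif_events: list = field(default_factory=list)  # events emitted
    tts_played: bool = False                          # TTS playback status
    timing_ms: dict = field(default_factory=dict)     # per-stage timings
```

Reviewing these records is the main tool for troubleshooting: every stage of the pipeline leaves its result in the session.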

Timing Expectations

| Stage | Typical Duration |
| --- | --- |
| Trigger to Capture | < 100 ms |
| Frame Upload | 200-500 ms |
| Pre-filter | 500-1000 ms |
| Full Analysis | 1-3 seconds |
| Action Execution | < 500 ms |
| Total | 2-5 seconds |

Performance Factors

Actual timing depends on network latency, image size, analysis complexity, and current system load. Active Monitoring sessions take longer due to multiple frames.
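Summing the per-stage figures from the table shows how the end-to-end range arises (bounds in milliseconds; the best case lands under 2 seconds because the fastest stages, and a skipped pre-filter, contribute little):

```python
# (min_ms, max_ms) per stage, taken from the timing table above.
STAGES_MS = {
    "trigger_to_capture": (0, 100),
    "frame_upload": (200, 500),
    "prefilter": (500, 1000),
    "full_analysis": (1000, 3000),
    "action_execution": (0, 500),
}

best_case = sum(lo for lo, _ in STAGES_MS.values())    # 1700 ms
worst_case = sum(hi for _, hi in STAGES_MS.values())   # 5100 ms
```

These brackets roughly match the quoted 2-5 second total; real deployments will drift within (or occasionally outside) them depending on the factors above.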

Best Practices

Reduce Latency

  1. Use Object Analytics triggers to pre-filter at camera
  2. Set appropriate resolution (BALANCED for most cases)
  3. Enable pre-filter for high-traffic areas
  4. Use specific, focused prompts

Reduce False Positives

  1. Configure pre-filter criteria carefully
  2. Use Perimeter/Object triggers instead of Motion
  3. Add contextual prompts ("authorized personnel wear badges")
  4. Set appropriate confidence thresholds

Optimize Costs

  1. Pre-filter rejects don't incur full analysis costs
  2. Lower resolution reduces processing time
  3. Specific triggers reduce unnecessary analyses
  4. Review sessions to identify patterns