Antaris Analytics
Products Benchmarks Roadmap Privacy

"We didn't study the literature and implement papers. We had a problem — agent memory sucked — and we solved it from first principles. Then we benchmarked it and found out we were competitive with systems way more complex."

— Antaris Analytics

Agents and Agent Infrastructure

Partnerships

Powered by Antaris Core

ForgeAI

We partnered with WealthHealth AI to build Forge, a personal AI that lives on your Mac or PC and takes away the complexity of agent setup. Unlike browser-based AI tools that forget you the moment you close the tab, Forge remembers your conversations, your preferences, and your context over time, all powered by the Antaris Analytics suite of tools. It gets to know users the way a real assistant would.

Visit www.forgeAI.bot →
Antaris Coming Soon

Enhanced AI agent with persistent memory, intelligent routing, and enterprise-grade security across multiple security levels, all built on Antaris Core. MCP client enabled, with an export-wizard command line that scans your current agent and copies over whichever features you'd like. Automatic JS-to-.py conversion for non-MCP tools. Cross-session context sharing lets multiple agents share memories in real time. Zero external dependencies, fully self-hosted, air-gappable.

Parsica-Memory
v3.0.0
File-based persistent memory for AI agents. 12-layer BM25+ search engine with LLM enrichment - generates search_queries, enriched_summary, and tags at write time for richer recall. Tiered hot/warm/cold storage with adaptive decay. Ingest quality gates reject noise before it enters the store. Full provenance on every memory: source, timestamp, session ID, confidence score. Works offline, zero cloud, zero API keys, zero external dependencies.

✦ NEW: Cross-session shared memory pool - multiple agents, one store
✦ NEW: Real-time context sharing across sessions
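A minimal sketch of what a provenance-tracked memory record and the ingest quality gate described above could look like. The class and function names (`MemoryRecord`, `write_memory`, `min_confidence`) are illustrative assumptions, not the Parsica-Memory API:

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class MemoryRecord:
    # Provenance fields from the description: source, timestamp, session ID, confidence.
    text: str
    source: str
    session_id: str
    confidence: float
    timestamp: float = field(default_factory=time.time)
    # Enrichment fields an LLM would generate at write time.
    search_queries: list = field(default_factory=list)
    enriched_summary: str = ""
    tags: list = field(default_factory=list)
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

def write_memory(store_path, record, min_confidence=0.5):
    """Ingest quality gate: reject noise before it enters the file-based store."""
    if record.confidence < min_confidence or not record.text.strip():
        return False  # rejected: never written
    with open(store_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return True
```

Append-only JSON lines keeps the store offline-friendly and diffable, which matches the zero-cloud, zero-API-key framing.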
antaris-router
v5.2.1
Intelligent model routing that learns from outcomes. Routes each query to the optimal provider based on cost, latency, confidence, and live SLA health. Supports A/B testing across models, fallback chains when providers degrade, confidence gating to reject low-quality responses, and per-model cost tracking with budget caps. Adapts routing weights over time as outcome quality is observed - it gets smarter the more you use it.

✦ Self-improving: routing weights adapt based on response quality
✦ Live provider health monitoring with automatic failover
✦ Budget caps and per-model cost tracking built in
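The routing idea above can be sketched as a weighted score over cost, latency, and an adaptive quality estimate, with unhealthy providers excluded for failover. Provider names, weights, and function signatures here are assumptions for illustration, not antaris-router's actual interface:

```python
# Hypothetical provider table: cost per call, latency, learned quality, live health.
providers = {
    "fast-model":  {"cost": 0.2, "latency_ms": 300,  "quality": 0.70, "healthy": True},
    "smart-model": {"cost": 1.0, "latency_ms": 1200, "quality": 0.92, "healthy": True},
}

def route(providers, quality_weight=0.5, cost_weight=0.3, latency_weight=0.2):
    """Pick the healthy provider with the best quality/cost/latency trade-off."""
    def score(p):
        return (quality_weight * p["quality"]
                - cost_weight * p["cost"]
                - latency_weight * p["latency_ms"] / 1000.0)
    healthy = {k: v for k, v in providers.items() if v["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy providers: fallback chain exhausted")
    return max(healthy, key=lambda k: score(healthy[k]))

def record_outcome(providers, name, good, lr=0.1):
    """Self-improving loop: nudge the quality estimate toward the observed outcome."""
    p = providers[name]
    p["quality"] += lr * ((1.0 if good else 0.0) - p["quality"])
```

The exponential-moving-average update in `record_outcome` is one simple way routing weights can "adapt as outcome quality is observed"; the real system may use something richer.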
antaris-guard
v5.2.1
Deterministic, zero-ML safety layer for AI agents. Catches prompt injection, jailbreak attempts, PII leakage (SSN, credit cards, emails, phone numbers), hate speech, and behavioral drift - all without an API call. Five security levels from open to locked-down, configurable per deployment. Rate limiting with per-user and global caps. Audit logging on every decision. MCP-compatible, drop-in middleware.

✦ Verified TPR 100%: no unsafe message passes through
✦ Verified FPR 0%: no safe message blocked
✦ Five security levels - from permissive to air-gapped strict
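A deterministic, zero-ML check like the one described reduces to pattern matching: regexes for PII shapes plus string markers for known injection phrasing. The patterns and the `check_message`/`level` interface below are simplified assumptions (real credit-card detection would add a Luhn check, for instance), not antaris-guard's rule set:

```python
import re

# Illustrative PII patterns only; production rules would be far more thorough.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}
INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")

def check_message(text, level=3):
    """Return (allowed, reasons). Hypothetical 1-5 scale; level 1 = open (log only)."""
    reasons = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    lowered = text.lower()
    reasons += [f"injection:{m}" for m in INJECTION_MARKERS if m in lowered]
    allowed = not reasons or level <= 1
    return allowed, reasons
```

Because every decision is a regex or substring hit, the same input always yields the same verdict with no API call, which is what makes per-decision audit logging cheap.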
antaris-context
v5.2.1
Precision context window management for token-limited models. Builds the prompt within a strict token budget using relevance scoring - most important content in, least important out. Overflow cascade trims memories, pitfalls, and history in priority order until the budget fits. Supports sliding window, summarization, and hard-truncation strategies. Guarantees the assembled prompt never exceeds the model's limit.

✦ NEW: Hard budget enforcement across all content types
✦ NEW: Budget-aware trim: memories, pitfalls, and history all managed
✦ Relevance scoring ensures highest-signal content always fits
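The overflow cascade can be sketched as priority-ordered packing under a hard budget: most important content is admitted first, and anything that would overflow is dropped. The whitespace tokenizer and `assemble` signature are stand-in assumptions (a real implementation would use the model's tokenizer and trim within sections rather than dropping them whole):

```python
def count_tokens(text):
    # Stand-in tokenizer: whitespace words; swap in the target model's tokenizer.
    return len(text.split())

def assemble(sections, budget):
    """sections: list of (priority, text); lower number = more important.
    Admits sections in priority order, skipping any that would bust the budget,
    so the assembled prompt can never exceed the limit."""
    out, used = [], 0
    for _, text in sorted(sections, key=lambda s: s[0]):
        need = count_tokens(text)
        if used + need <= budget:
            out.append(text)
            used += need
        # else: trimmed by the overflow cascade
    return "\n".join(out)
```

The hard guarantee comes from checking the budget before admitting each section, not from truncating after the fact.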
antaris-pipeline
v5.2.1
Orchestrates the complete agent turn through a pre_turn()/post_turn() pair. Runs guard → memory recall → context assembly → model routing → output guard → memory ingest in sequence, with each stage isolated and independently configurable. LLM enrichment hook fires at ingest time, generating search_queries and enriched_summary automatically. Built-in compaction support: flushes durable memories before context window resets. Handles errors per-stage so one failure never silently corrupts the turn.

✦ Two calls: pre_turn() + post_turn() handle the full agent lifecycle
✦ Pre-compaction flush: durable memories saved before context resets
✦ Per-stage error isolation - no silent failures
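The pre_turn()/post_turn() lifecycle with per-stage isolation could be wired roughly as follows. The `Pipeline` class and the stage callables are hypothetical stand-ins for the real components, shown only to make the sequencing and error isolation concrete:

```python
class Pipeline:
    """Sketch: guard → recall → context → route before the LLM; guard → ingest after."""

    def __init__(self, guard, memory, context, router):
        self.guard, self.memory = guard, memory
        self.context, self.router = context, router

    def pre_turn(self, user_msg):
        errors = {}
        if not self.guard(user_msg):
            raise ValueError("input blocked by guard")
        try:
            recalled = self.memory.recall(user_msg)
        except Exception as e:
            # Per-stage isolation: a recall failure degrades the turn, never corrupts it.
            recalled, errors["memory"] = [], str(e)
        prompt = self.context(user_msg, recalled)
        model = self.router(prompt)
        return model, prompt, errors

    def post_turn(self, user_msg, response):
        if not self.guard(response):   # output guard
            return None
        self.memory.ingest(user_msg, response)  # enrichment hook would fire here
        return response
```

The caller runs the LLM between the two methods, which is why one failed stage can be reported in `errors` without silently corrupting what reaches the model.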

Data Flow

Every agent turn, pre_turn() before LLM · post_turn() after

Input → Guard → Memory → Context → Router → LLM
                                             ↓
Pipeline ──────────── orchestrates ──────── Memory ← ingest

Guard blocks unsafe input → Memory recalls relevant context → Context assembles the prompt → Router picks the model → LLM runs → Memory ingests the response

Parsica Memory Benchmarks

3.0.0 Benchmarks and Open Source Release Coming Soon

Roadmap →

New Roadmap, Rebrand, & New Products Coming Soon.

View the full roadmap →

Products

Parsica

Live Demo

Zero-dependency document search with full provenance. BM25+ semantic search across any corpus. Every result returns the exact source.
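For a rough sense of the ranking family Parsica builds on, here is a textbook BM25 scorer over a tiny corpus. This is the standard formula only, not Parsica's 12-layer BM25+ engine, and the function name and parameters are illustrative:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with plain BM25 (higher = more relevant)."""
    tokenized = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter()                       # document frequency per term
    for d in tokenized:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)                  # term frequency in this doc
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

Because every score decomposes into per-term contributions over known documents, each result can carry exact provenance back to its source.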

Parsica-Memory

v5.2.1

File-based persistent memory for AI agents. 12-layer BM25+ search, LLM enrichment, cross-session context sharing. Zero external dependencies.

Antaris

Coming Soon

Enhanced AI Agent with Parsica-Memory, intelligent routing, enterprise-grade security. MCP Client enabled, zero dependencies out of the box.

Antaris Core

v5.2.1

Complete Antaris AI agent infrastructure bundle: memory, routing, security, context, and pipeline orchestration. Zero external dependencies.

Inventaris

Coming Soon

AI-powered inventory and analytics. More details coming soon.