🐱

Catyro


Intelligent Data Processing Platform

Catyro leverages AI agents to automate complex data ingestion, parsing, mapping, and review workflows, turning unstructured documents into structured, actionable data.

🤖

AI-Powered Agents

Multi-agent pipeline with file detection, parsing, and intelligent schema mapping.

📊

Smart Field Mapping

Automatically maps source columns to target schemas with confidence scoring.

✅

Human-in-the-Loop

Review, correct, and confirm AI-processed data before final ingestion.

🔬

Agentic AI Evaluation

Three-layer evaluation framework for model quality, retrieval accuracy, and agent trajectory analysis.


Products

Production-grade tools for building, evaluating, and deploying AI systems with confidence.

🔬

Agentic AI Evaluator

Three-layer evaluation framework for agentic AI systems

In Development

A comprehensive evaluation platform that scores agentic AI systems across model output quality, retrieval accuracy, and multi-step trajectory correctness. Designed to run before every deploy, catching regressions in reasoning, retrieval, and routing before they reach production.

Load your agent topology (models, tools, retrieval stores, routing logic) and get a structured evaluation report with per-layer scoring, failure analysis, and regression tracking.

Layer 1

Model Evaluation

Evaluate the raw quality of LLM outputs independent of the surrounding system. This layer answers: "Is the model producing correct, grounded, faithful answers?"

🎯
Output Quality

Correctness, completeness, and relevance of generated responses against ground-truth or rubric-based evaluation.

👻
Hallucination Detection

Identifies fabricated facts, unsupported claims, and confident-sounding but incorrect statements.

📎
Faithfulness

Measures whether the model's output is grounded in the provided context, with no extrapolation beyond the source material.

🧩
Consistency

Detects contradictions within outputs and across repeated runs of the same prompt.
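As a concrete illustration of the cross-run consistency metric, here is a minimal sketch. It assumes repeated runs of the same prompt have already been collected as strings; the scoring rule (agreement with the most common normalized answer) is illustrative, not the evaluator's actual implementation.

```python
from collections import Counter

def consistency_score(outputs):
    """Fraction of runs that agree with the most common (normalized) answer.

    A score of 1.0 means every repeated run of the same prompt produced
    the same answer; lower scores flag run-to-run contradictions.
    """
    if not outputs:
        raise ValueError("need at least one output")
    # Normalize case and whitespace so trivial variations don't count
    # as contradictions.
    normalized = [" ".join(o.lower().split()) for o in outputs]
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(normalized)
```

For example, `consistency_score(["Paris", "paris", "Lyon"])` returns 2/3, flagging the one disagreeing run.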

Layer 2

Retrieval Evaluation

Retrieval is measured independently of model quality because the model's ceiling is the retrieval floor: if retrieval is broken, no model can compensate. This layer isolates retrieval performance.

🔍
Precision@K

Of the top K retrieved chunks, what fraction are actually relevant? Catches noisy retrieval that floods the context window.

📑
Recall

Of all relevant chunks in the corpus, what fraction were retrieved? Catches retrieval that misses critical information.

📦
Chunk Relevance

Per-chunk relevance scoring that identifies which retrieved passages actually contributed useful context vs. noise.

🧱
Retrieval–Model Boundary

Separates retrieval failures from model failures. When the answer is wrong, was it because of bad retrieval or bad reasoning?
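Precision@K and recall have standard information-retrieval definitions; a minimal sketch over chunk ids (the function names and signatures are illustrative, not the evaluator's API):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved chunk ids that are relevant."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for c in top_k if c in relevant) / len(top_k)

def recall(retrieved, relevant):
    """Fraction of all relevant chunk ids that were retrieved."""
    if not relevant:
        return 0.0
    retrieved_set = set(retrieved)
    return sum(1 for c in relevant if c in retrieved_set) / len(relevant)
```

With `retrieved = ["a", "b", "c", "d"]` and `relevant = {"a", "c", "e"}`, `precision_at_k(retrieved, relevant, 3)` is 2/3 (noise in the top results) and `recall(retrieved, relevant)` is 2/3 (one relevant chunk was missed).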

Layer 3

Trajectory Evaluation

For multi-step agents, the final answer is the wrong unit of analysis. This layer evaluates the entire sequence: which tools were called, whether routing decisions were correct, and whether the agent reached a valid terminal state efficiently.

🗺️
Step Sequence Analysis

Evaluates the full trajectory of agent actions: was each step logically justified? Were unnecessary steps taken?

🔧
Tool Selection Correctness

For each decision point, did the agent invoke the right tool with correct parameters? Catches tool misuse and hallucinated tool calls.

🔀
Routing Accuracy

When the agent needed to choose between sub-agents, APIs, or branches, did it route correctly? Measures decision-point accuracy.

🏁
Terminal State Validity

Did the agent reach a valid terminal state? Did it terminate efficiently without loops, dead ends, or unnecessary retries?
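One way to make these trajectory checks concrete is to represent a trajectory as a list of step labels and score it for tool-selection correctness, terminal-state validity, and simple retry loops. The step encoding and scoring rules below are illustrative assumptions, not the product's actual trace format.

```python
def find_loops(steps, window=2):
    """Detect immediately repeated action subsequences (e.g. retry loops)."""
    loops = []
    for size in range(1, window + 1):
        for i in range(len(steps) - 2 * size + 1):
            if steps[i:i + size] == steps[i + size:i + 2 * size]:
                loops.append((i, steps[i:i + size]))
    return loops

def evaluate_trajectory(steps, expected_tools, terminal_states):
    """Score a trajectory for tool correctness and terminal validity.

    `steps` is a list of labels like "tool:search"; `expected_tools` is
    the reference tool-call sequence; `terminal_states` is the set of
    valid final steps. All three encodings are hypothetical.
    """
    tool_calls = [s for s in steps if s.startswith("tool:")]
    correct = sum(1 for s, e in zip(tool_calls, expected_tools) if s == e)
    tool_accuracy = correct / len(expected_tools) if expected_tools else 1.0
    return {
        "tool_accuracy": tool_accuracy,
        "terminated_validly": bool(steps) and steps[-1] in terminal_states,
        "loops": find_loops(steps),
    }
```

A trajectory that retries the same search twice before fetching would score 0.5 on tool accuracy and report one loop, even though it still terminates validly.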

🧬 Topology Loading & Reporting

Load Your Agent Topology

Define your agents, models, tools, retrieval stores, and routing logic. The evaluator maps your system architecture and targets each component for layer-appropriate evaluation.

Agents · Models · Tools · Retrievers · Routers · Chains
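The topology format itself is not shown in this document, so purely as an illustration of the idea, a hypothetical topology plus a helper that targets components for layer-appropriate evaluation might look like this (every name and key below is invented):

```python
# Hypothetical topology; the evaluator's real schema is not public,
# so the structure and field names here are illustrative only.
agent_topology = {
    "agents": [
        {"name": "claims-router", "model": "claude", "role": "routing"},
        {"name": "claims-parser", "model": "claude", "role": "extraction"},
    ],
    "tools": ["parse_attachment", "lookup_policy"],
    "retrievers": [{"name": "policy-store", "type": "vector", "top_k": 5}],
    "routing": {"claims-router": ["claims-parser"]},
}

def components_for_layer(topology, layer):
    """Map each evaluation layer to the components it targets."""
    targets = {
        "model": [a["name"] for a in topology["agents"]],        # Layer 1
        "retrieval": [r["name"] for r in topology["retrievers"]],  # Layer 2
        "trajectory": list(topology["routing"]),                   # Layer 3
    }
    return targets[layer]
```

The point of the helper is the mapping itself: model evaluation targets agents, retrieval evaluation targets retrievers, and trajectory evaluation targets routing decision points.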
Pre-Deploy Evaluation Gate

Run the full three-layer evaluation as part of your CI/CD pipeline. Block deploys that regress on any layer. Track scores across versions.

CI/CD Integration · Regression Detection · Version Tracking · Deploy Gate
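A deploy gate of this kind can be sketched in a few lines; the thresholds and score keys below are invented for illustration, and a real gate would load them from project config.

```python
# Illustrative per-layer minimums; real values would come from config.
THRESHOLDS = {"model": 0.85, "retrieval": 0.80, "trajectory": 0.75}

def gate(layer_scores, thresholds=THRESHOLDS):
    """Return the layers whose score fell below the deploy threshold.

    An empty list means the deploy may proceed; a CI step would fail
    the build (exit non-zero) when this list is non-empty.
    """
    return [
        layer for layer, minimum in thresholds.items()
        if layer_scores.get(layer, 0.0) < minimum
    ]
```

For example, scores of 0.91 / 0.78 / 0.83 would block the deploy on the retrieval layer alone, which is exactly the kind of isolated regression the three-layer split is meant to surface.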
Scoring & Reporting

Per-layer scores, per-component breakdowns, failure case analysis, and trend tracking across evaluation runs.

Layer Scores · Component Drilldown · Failure Analysis · Trend Charts

Solutions

Tailored automation solutions for each of our clients, built on the Catyro platform.

🏢

Insurance Brokerage

Insurance brokerage data automation solutions

Active

Automated Claim Processing

End-to-end automation for insurance claim document ingestion. Emails with claim attachments are parsed, mapped to the target schema using AI agents, reviewed by a human, and ingested into the database, reducing manual processing time by 90%.

Outlook Add-in · AI Schema Mapping · Human Review · Auto-Ingestion
Amazon Bedrock · Claude AI · AWS Lambda · DynamoDB
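The workflow described above (parse, AI schema mapping, human review, ingestion) can be sketched abstractly. The callables below stand in for the real components (the parser, the Bedrock-backed mapper, the reviewer UI, and the DynamoDB writer), none of which are shown in this document:

```python
def process_claim_email(attachment_text, map_fields, human_review, ingest):
    """Illustrative claim pipeline: parse -> AI map -> human review -> ingest.

    The three callables are placeholders for real components; the
    parsing step is reduced to wrapping the raw text.
    """
    parsed = {"raw": attachment_text}       # parse the claim attachment
    mapped = map_fields(parsed)             # AI schema mapping
    approved, final = human_review(mapped)  # human-in-the-loop gate
    if not approved:
        return {"status": "rejected"}
    ingest(final)                           # write to the database
    return {"status": "ingested", "record": final}
```

Note that the human-review step sits between mapping and ingestion, so nothing reaches the database without explicit confirmation.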
💳

Credit Card Company

Financial data processing solutions

Coming Soon

Solutions in Development

Automation solutions for the Credit Card Company are in development. Stay tuned for updates.

Evaluation Dashboard

View evaluation reports from the Catyro Agentic AI Evaluator. Paste a report JSON or load one from the API.

📄

Layer Scores

Metric Breakdown

Trace Results

⚠️ Issues & Failures


🤖

📊

✅ Auto-Mapped Fields ≥ 90% Confidence

⚠️ Fields Needing Review < 90% Confidence

These fields could not be mapped with high confidence. Please select the correct target field from the dropdown, or leave as "unmapped".
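The ≥ 90% confidence split described above can be sketched as follows; the mapping structure (source column to target field plus confidence) is an illustrative assumption about what the AI mapper returns.

```python
CONFIDENCE_THRESHOLD = 0.90  # fields at or above this are auto-mapped

def split_by_confidence(mappings, threshold=CONFIDENCE_THRESHOLD):
    """Split AI field mappings into auto-mapped vs needs-review buckets.

    `mappings` maps a source column name to a (target_field, confidence)
    pair; low-confidence entries go to the human-review bucket.
    """
    auto, review = {}, {}
    for source, (target, confidence) in mappings.items():
        bucket = auto if confidence >= threshold else review
        bucket[source] = (target, confidence)
    return auto, review
```

Anything landing in the review bucket is exactly what the dropdown above presents to the reviewer, including the option to leave a field unmapped.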

📋

✅

The reviewed and mapped data has been saved to the database.

You can close this page now.

Data Rejected

The parsed data has been marked as rejected and will not be ingested.