Meet Aurora

Finding the needle
in the haystack faster

For more than twenty years, anti-money laundering has been the same puzzle. Billions of transactions move through financial systems every day, and only a tiny fraction of them are tied to actual financial crime. Finding those cases is a needle-in-a-haystack problem, and missing even one is out of the question.

Rule-based systems narrowed the search.
Machine learning shrank the haystack.
Neither changed the underlying game. Agentic AI does.

AML Unified Rule Orchestration and Risk Agent

Aurora is a modular, agent-based suite built by Consortix on SAS Viya. It extends existing AML systems rather than replacing them, and it tackles the slowest, most manual parts of the scenario lifecycle while keeping every step transparent, documented, and auditable.

Watch Aurora in action
Download the Aurora Whitepaper

Discover the capabilities and benefits in detail.

Find out more about Aurora

Read more about Aurora on our blog.

Are you ready for agentic AI?
Fill in the readiness questionnaire

Find out how quickly your organization could integrate agentic AI.

AI-readiness
questionnaire

On average, how long does it take to move a new detection scenario from concept to production?
How many active detection scenarios are running in your system today?
How many AML alerts does your system generate per day on average?
How many AML investigators do you have?
On average, how long does it take to investigate one alert?
What is the measured false-positive rate of a typical scenario in your system?
Do you have a labelled alert or SAR history that could be used for training?
Where does your AML team stand today on AI and ML usage?
Do you have a formal model risk management process, and does it cover AI models?
How much internal capacity does your organization have for AML scenario development and maintenance?
Which agentic AI use case would deliver the most value for your organization right now?
What is the most important outcome you expect?

Marketing Permissions

Consortix will use the information you provide on this form to be in touch with you and to provide updates and offers.

Data Privacy

For more information about our privacy practices please read our Privacy Notice. By clicking below, you agree that we may process your information in accordance with these terms.


From requirement
to go-live with agentic AI

Scenario development traditionally runs through business requirements, IT specification, implementation, testing, tuning, approval, and go-live. Weeks of calendar time, many handovers, and plenty of room for things to get lost between business and IT.

With Aurora, those same steps collapse into one directed workflow. The expert writes the requirement in natural language. The agents take over from there, turning it into a formal rule, testing it, validating compliance, and loading the result into production. What used to take weeks now takes days, and the scenario that comes out is documented, traceable, and always editable by the team responsible for it, so final control stays with the people who own the risk.

Multiple agents with specific roles

The pipeline is modular: each agent performs one auditable task and hands a structured artifact to the next.

Completeness Checker

Reads the natural-language description and verifies that all the information required to build the rule is present. If something is missing or ambiguous, it asks targeted questions and records the clarified input.

Rule Creation

Translates the confirmed description into formal, system-readable rule logic. The output is technically implementable, fully documented, and structured for audit.

Governance Agent

Checks whether the new rule overlaps with existing scenarios and verifies that the rule contains no discriminatory or legally impermissible filtering conditions. Regulatory and ethical compliance is enforced at the point of creation.

Unit Tester

Generates test cases automatically, runs them against the proposed rule, and produces a pass/fail report. Where test results suggest refinement, it feeds observations back into the logic.

Recalibrator

Updates the AI models already in production with fresh data produced by the new rule, keeping the broader detection system aligned with the latest logic.
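The hand-off pattern described above can be sketched in code. This is a hypothetical illustration only: Aurora's actual interfaces are not public, so every name, field, and check below is an assumption, not the product's implementation. The sketch shows the core idea, each agent performing one auditable task on a structured artifact before passing it on.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Artifact:
    """Structured artifact handed from one agent to the next (illustrative)."""
    description: str                      # natural-language requirement
    rule_logic: str = ""                  # formal rule produced downstream
    audit_trail: list = field(default_factory=list)

def completeness_checker(a: Artifact) -> Artifact:
    # Verify the description contains the information needed to build a rule.
    if not a.description.strip():
        raise ValueError("Description is missing or ambiguous")
    a.audit_trail.append("completeness: ok")
    return a

def rule_creation(a: Artifact) -> Artifact:
    # Translate the confirmed description into formal rule logic (placeholder).
    a.rule_logic = f"RULE WHEN ({a.description}) THEN ALERT"
    a.audit_trail.append("rule created")
    return a

def governance_agent(a: Artifact) -> Artifact:
    # Toy check: reject rules that filter on impermissible attributes.
    banned = {"nationality", "religion"}
    if any(term in a.rule_logic.lower() for term in banned):
        raise ValueError("Impermissible filtering condition")
    a.audit_trail.append("governance: ok")
    return a

def unit_tester(a: Artifact) -> Artifact:
    # Stand-in for generated test cases and a pass/fail report.
    a.audit_trail.append("unit tests: pass")
    return a

PIPELINE: list[Callable[[Artifact], Artifact]] = [
    completeness_checker, rule_creation, governance_agent, unit_tester,
]

def run_pipeline(description: str) -> Artifact:
    artifact = Artifact(description=description)
    for agent in PIPELINE:
        artifact = agent(artifact)        # every step leaves an audit entry
    return artifact

result = run_pipeline("amount > 10000 and cash deposit within 24h")
```

Because every agent appends to the audit trail and raises on failure, the final artifact carries its own documentation, which is the property the suite emphasizes: speed without losing traceability.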

Aurora is a winning project

At the 2025 SAS Hackathon, a global competition with over 2,000 participants from 66 countries, the Consortix-AURORA team won three categories: Banking, Agentic AI/Decisioning, and Trustworthy AI. The Banking win recognized Aurora's approach to AML scenario development; the other two awards recognized that automation speed and regulatory governance can coexist in the same solution. The overall champion will be announced at SAS Innovate 2026 in April, where Consortix is among the contenders.