NEW: Read our new paper

Discover the mathematical model of your software.

SpecLogician helps developers and LLM coding agents discover behavioral structure from requirements, source code, scenarios, tests, logs, and operational traces — then turn it into a shared semantic model for reasoning, verification, semantic observability, and test generation.

Start now
Free trial available · Requirements, source code, tests, and logs unified · Semantic observability included

How to Get Started

Go from sign-up to discovering the behavioral model of your software in four steps.

1. Sign up and get an API key
Your first step is to obtain an Imandra Universe API key, which gives you access to SpecLogician and other tools in the Imandra Universe ecosystem.

2. Install locally

pip install speclogician
speclogician --help

Install the SpecLogician package locally so your workspace can build models, discover artifacts, and run analysis.

3. Install the extension

Install the SpecLogician extension in VS Code, Cursor, or Antigravity.

The extension works with your local installation to visualize states, transitions, graphs, scenarios, and related artifacts directly in your editor.

4. Ask your AI agent

Claude / GPT / Gemini / Cursor: "Use SpecLogician to discover the states and transitions in this service."

Let your agent discover the model, inspect scenarios, and reason about behavior instead of guessing from local code alone.

Read the full getting started guide here.

Discover and refine a living mathematical model of your system, grounded in code, tests, and observed behavior.

The fundamental problem

Why AI Coding Needs Mathematical Context

LLM coding agents are powerful code generators, but their reasoning is fundamentally local. As systems become more stateful, integrated, and policy-heavy, the number of meaningful behaviors and edge cases rises quickly — while the amount of context an LLM can reliably reason over does not.

As the interactions between components grow, so does the space of behaviors an agent must reason about. Without a formal model, this space is opaque.

The number of meaningful behaviors grows combinatorially with system complexity — making prompt-local reasoning increasingly unreliable.

This matters most in enterprise and regulated environments, where teams need precise reasoning about behavior, assumptions, compliance, and change impact.

Charts: cost of inference and reasoning correctness plotted against software complexity, comparing LLM-only with LLM + SpecLogician.

The missing piece is not more generated code. It is a way to discover the behavioral model already embedded in the system and use it as a stable reasoning reference.

  • Large, evolving codebases with many interacting components
  • Long-lived business logic and policy-heavy behavior
  • Auditability, compliance, and controlled change
  • High operational, legal, or financial cost of subtle mistakes
What you can discover

Discover Hidden Paths, Gaps, and Change Impact

SpecLogician helps developers and AI agents discover behavioral structure that is hard to see from local code alone — then use it for testing, migration, change review, policy-heavy engineering, and semantic observability.

Discover test gaps and uncovered behavior

Identify meaningful behaviors that are not covered by current scenarios or tests, and generate higher-value tests from uncovered regions.

Discover hidden paths before migration

Use mathematical context to understand legacy behavior, preserve critical properties, and reduce risk while moving to new implementations.

Discover change impact before merge

Reason about how a proposed edit changes reachable behavior, assumptions, and edge cases before code is merged.

Discover policy conflicts and exception paths

Model complex business rules, exception paths, and constraints so AI agents can reason about compliance-sensitive behavior with more rigor.

Discover drift across requirements, tests, and logs

Connect intended behavior, tested behavior, and observed production behavior through a shared model to find drift, contradictions, and missing cases.

Connect observability to system semantics

Use logs, traces, and runtime evidence to refine the discovered model, understand which paths are actually exercised in production, and identify unexpected or drifting behavior.

Reasoning loop

How Agents, SpecLogician, and ImandraX Work Together

When prompt-local reasoning is not enough, LLM coding agents use SpecLogician to discover behavioral structure, refine it into mathematical context, invoke reasoning, and act on the evidence.

Diagram showing the reasoning loop between the LLM coding agent, SpecLogician, ImandraX, reasoning artifacts, and the human developer
Model discovery

How SpecLogician Discovers the Model

SpecLogician helps LLM coding agents discover states, transitions, predicates, and scenarios from the artifacts that already exist in a project.

Diagram showing requirements, scenarios, source code, tests, and logs grounded in the same domain model
  • Requirements

    Reveal intended behavior, constraints, and obligations.

  • Source code

    Reveals operational behavior, control flow, state changes, and domain logic.

  • Scenarios

    Capture structured behavioral slices of what is true before, what happens, and what should follow.

  • Tests and logs

    Provide expected and observed evidence that helps refine the discovered model.

  • Operational traces

    Add semantic observability by grounding the model in behavior actually seen in production.

  • Predicates, transitions, and state

    Turn discovered structure into an analyzable model of behavior over time.

Specification construction

How the Specification Is Constructed

In SpecLogician, the specification is a shared mathematical model discovered from the system and refined into a form that can be visualized, reasoned about, and kept aligned over time.

SpecLogician uses ImandraX and the Imandra Modeling Language to discover and represent system behavior explicitly, then reason over that specification to check consistency, find edge cases, generate high-value tests, and connect runtime evidence back to system semantics.

Domain model

The domain model defines the core state and action types of the system.

type state = {
  logged_in : bool;
  login_count : int;
}

type action =
  | Login
  | Logout

Predicates and transition

Predicates describe properties we care about, and the transition function describes how the system evolves.

let step (s : state) (a : action) : state =
  match a with
  | Login ->
      { logged_in = true;
        login_count = s.login_count + 1 }
  | Logout ->
      { logged_in = false;
        login_count = s.login_count }

let is_logged_in (s : state) : bool =
  s.logged_in

let has_logged_in_at_least_once (s : state) : bool =
  s.login_count > 0

A single scenario

Scenarios capture intended behavior in structured slices grounded in the domain model.

Example scenario

Given: the user is logged out
When: a Login action occurs
Then: is_logged_in becomes true and has_logged_in_at_least_once holds
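
This scenario can be sketched as a verification goal in IML over the domain model above. The non-negative `login_count` invariant is an assumption added here so the Then clause provably holds; exact SpecLogician scenario syntax may differ.

```ocaml
(* Sketch: the example scenario as an IML verification goal. *)
verify (fun s ->
  (not (is_logged_in s) && s.login_count >= 0)        (* Given *)
  ==>
  (let s' = step s Login in                           (* When  *)
   is_logged_in s' && has_logged_in_at_least_once s')) (* Then *)
```

Without the `login_count >= 0` assumption, the goal would be refutable (a state with `login_count = -1` breaks the Then clause) — exactly the kind of hidden precondition this reasoning surfaces.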

State describes what the system currently knows.

Actions describe events that can change the system.

Predicates describe properties we want to reason about.

Scenarios turn intended behavior into analyzable structure.

Real systems scale this up by linking many such domain models, predicates, scenarios, requirements, tests, logs, and traces into one evolving mathematical specification, then using ImandraX to reason over it.

Mathematical reasoning

What Mathematical Reasoning Takes Place

Once SpecLogician has helped developers and LLM coding agents discover shared mathematical structure, they can invoke automated reasoning over the model.

Consistency checking

Check whether requirements, scenarios, and modeled behavior are mutually consistent.

Reachability analysis

Determine whether intended states and transitions are actually possible, and whether some states are unreachable.
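
With the login model above, a reachability question can be sketched as an IML instance query, which asks ImandraX to synthesize a concrete witness state:

```ocaml
(* Sketch: is there a state that is logged out now
   but has logged in at least once before? *)
instance (fun s ->
  not (is_logged_in s) && has_logged_in_at_least_once s)
(* A witness such as { logged_in = false; login_count = 1 }
   shows the combination is possible. *)
```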

Contradiction detection

Find conflicts between requirements, constraints, scenarios, and modeled behavior.

Edge-case discovery

Surface unusual, boundary, and failure-prone behaviors that are easy to miss with prompt-local reasoning.

Counterexample generation

If an assumption or property is false, produce a concrete case that breaks it.
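
For the login model above, a sketch of what this looks like in IML: verifying a false property prompts ImandraX to return a concrete refuting case.

```ocaml
(* Sketch: a plausible-sounding but false property --
   "every action leaves the user logged in". *)
verify (fun s a ->
  is_logged_in (step s a))
(* Refuted: any state paired with a = Logout is a
   counterexample, since step s Logout sets logged_in = false. *)
```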

Coverage and test generation

Identify uncovered behavioral regions and generate high-value tests from them.

Enterprise and regulated environments

Built for Controlled Change, Compliance, and Safety-Critical Delivery

The techniques behind SpecLogician come from years of applying mathematical modeling and automated reasoning in large, heavily regulated organizations.

These are environments where teams need to deliver safety-critical software with confidence, support audit and governance, reason precisely about policy and edge cases, and extract actionable intelligence from complex software behavior.

Because the model can be refined using logs, traces, and operational evidence, SpecLogician helps teams connect formal reasoning with what the system is actually doing in production.

SpecLogician brings that proven discipline into an LLM-powered workflow. This is not a new process invented for AI coding — it is the operationalization of years of logic-first engineering practice into a system that AI agents can participate in.

Diagram: requirements, source code, tests, and logs feeding into a shared model.
Get started

Discover, observe, and reason about your software

Use SpecLogician to help developers and LLM coding agents discover the mathematical model implicit in requirements, source code, scenarios, tests, logs, and traces — then use it for reasoning, verification, semantic observability, and test generation.

Sign up for an Imandra Universe account to get your API key and start reasoning about your software today.

Sign up