Built by auditors who ran the last bad audit manually

We know what it costs to miss a material misstatement. That's why we built a detection system, not another dashboard.

The year we missed 14 December entries

James McKinnon spent six years in public accounting — three at a regional firm in Boston, three as an internal audit manager at a mid-cap manufacturer. In 2022, his team completed a year-end audit for a client where the controller had posted 14 backdated adjusting entries across two accounts in December. The entries were individually below materiality. Collectively, they shifted $2.3M of inventory impairment into the following fiscal year.

The entries were discovered — by the client's new CFO, not the audit team — four months after the engagement closed. The firm issued a restatement. Two senior managers resigned. The audit committee asked why a tool designed to detect exactly this pattern wasn't in use. There wasn't a good answer.

James left the firm in early 2023 and spent six months building the first version of AuditPulsar. The initial prototype was a Python script that ran Benford's Law analysis against a trial balance export. By mid-2023, it had a gradient-boosted ML model trained on historical journal entry data. By late 2023, it had its first three paying clients — all firms that had experienced their own version of the same story.
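A first-digit Benford's Law check of the kind that prototype ran can be sketched in a few lines. This is a minimal illustration, not AuditPulsar's actual code; the function name and the 0.015 mean-absolute-deviation cutoff (a commonly cited nonconformity threshold for first-digit tests) are assumptions for the example.

```python
import math
from collections import Counter

def benford_first_digit_test(amounts, mad_threshold=0.015):
    """Compare the leading-digit distribution of ledger amounts to
    Benford's expected frequencies, using mean absolute deviation
    (MAD) as the conformity measure. Illustrative sketch only."""
    # Benford expected frequency for leading digit d: log10(1 + 1/d)
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    # Leading nonzero digit of each amount (zero amounts skipped)
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    n = len(digits)
    counts = Counter(digits)
    observed = {d: counts.get(d, 0) / n for d in range(1, 10)}
    mad = sum(abs(observed[d] - expected[d]) for d in range(1, 10)) / 9
    return mad, mad > mad_threshold  # True -> population worth a closer look
```

Run against a trial balance export, a result above the threshold doesn't prove anything on its own; it simply says the population's leading digits deviate enough from the expected curve to justify drilling into the underlying entries.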

Today AuditPulsar serves accounting firms across the US, from sole practitioners to regional practices with 30+ professionals. The core mission hasn't changed: find the entries that shouldn't be there, before the engagement closes.

[Image: financial audit workstation showing journal entry analysis dashboard]


What guides how we build

Detection before judgment

AuditPulsar surfaces anomalies. Your auditors make the calls. We don't flag entries as fraudulent — we flag them as statistically unusual and give your team the context to investigate. The audit judgment stays with the professional.

Client data stays private

We process financial data in isolated environments and delete it after the scan completes. We don't use client ledger data to improve our models. This is non-negotiable: accounting firms cannot afford to let client data leave their control without accountability.

Auditability of the tool itself

Every scan is logged. Every finding includes a methodology note explaining why the entry was flagged — which statistical test, which ML feature, which threshold. If a regulator asks why an entry appeared in the workpaper, you can answer that question precisely.
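A per-finding methodology note of the kind described above might carry a record like the following. This is a hypothetical shape for illustration; the field names and values are not AuditPulsar's actual schema.

```python
# Hypothetical per-finding record; every field name here is
# illustrative, not AuditPulsar's real workpaper export format.
finding = {
    "entry_id": "JE-2024-118402",            # journal entry being flagged
    "flagged_by": "benford_first_digit",     # which statistical test fired
    "model_feature": "posting_hour_of_day",  # which ML feature contributed
    "threshold": {                           # which threshold was crossed
        "metric": "MAD",
        "limit": 0.015,
        "observed": 0.042,
    },
    "scan_id": "scan-7f3a",                  # links back to the scan log
}
```

The point of structuring it this way is that the answer to "why is this entry in the workpaper?" is a lookup, not a reconstruction.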

What firms report after their first 90 days

31%

Mid-market assurance firm, 18 professionals

Ran AuditPulsar on 22 manufacturing and distribution clients in its first full busy season. Reduced time-in-field on journal entry testing by 31%. Found 3 reportable conditions that weren't flagged in prior-year engagements.

4 hrs

Regional CPA practice, 6 professionals

Replaced a 12-hour manual Excel process with a 4-hour AuditPulsar workflow covering the same population. Staff time reallocated to substantive testing. Client NPS scores improved across three retained clients in year one.

0

Internal audit team, Fortune 500 manufacturer

Zero PCAOB documentation deficiencies in their most recent inspection cycle. The audit committee specifically noted the quality of journal entry testing documentation — generated directly from AuditPulsar's workpaper export.

Where we are today

2023
Founded in Boston, MA
38
Active firm subscribers (as of Q1 2025)
14.7M
Journal entries in training dataset
SOC 2
Type II certified since 2024

See it running on a real ledger

We'll demo AuditPulsar using a sample dataset matching your typical client size. No slide decks. Straight into the tool.

Schedule a Demo