LiminalX · /assess · For AI product teams

Know where your product stands
before the market decides.

You built it. You validated it. But how does your evidence hold up under independent scrutiny—from a hospital procurement team, an investor, or a regulator?* /assess gives you that picture, on your terms, before anyone else does—built around the evidence-assessment guidance published by IAEA, WHO, FDA, and READI.

18 Failure Modes · 8 Validation Methods · 8 Evidence Anchors
See the VERA™ framework ↓
[VERA™ analysis demo (animated dashboard): V · 18 failure modes (f0–f17) · E · 8 validation methods · R · 8 evidence anchors · sample output: VERA™ Score 3.1 / 5.0, Coverage 80%, Confidence MED]
The Framework

What is VERA™?

VERA™ is the analytical framework at the core of every /assess engagement. It provides a structured, reproducible method for mapping your product's evidence against every known failure mode in clinical AI—grounded in IAEA, WHO, FDA, and READI reference standards—and produces a VERA™ Score and VERA™ Report that documents where evidence is strong, thin, or absent.

V
Validation Methodology
18 failure modes across 6 categories—Input, Knowledge, Generalisation, Bias, Deployment, and Evaluation. For each product type, applicable failure modes are identified and scoped; non-applicable ones are excluded from scoring and coverage.
E
Evidence Sourcing
Independent retrieval across 16 source categories. Independence measured at institution level. Vendor-connected sources are weighted separately via T3 intelligence to ensure scores reflect genuinely independent evidence.
R
Robustness Validation
8 validation methods—adversarial probing, behavioural consistency, ground truth anchoring, provenance tracing, consensus, human review, statistical analysis, and traceable sourcing—used to evaluate how evidence addresses each failure mode.
A
Assessment
The scored output: a VERA™ Score (0–5.0) per failure mode, an overall VERA™ Score, and a complete VERA™ Report with source trail, independence breakdown, confidence ratings, and ranked gaps.
VERA™ · 34 ASSESSMENT DIMENSIONS · V · E · R · A
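LiminalX does not publish the arithmetic behind the headline numbers. As a minimal sketch only, assuming the overall VERA™ Score is the mean of scored applicable failure modes and coverage is the fraction of applicable modes that could be scored at all (the function name and the assumptions are ours, not LiminalX's):

```python
from statistics import mean

def vera_summary(mode_scores):
    """Summarise per-failure-mode scores for one product.

    mode_scores maps failure-mode IDs to a 0-5.0 score, or to None for
    modes flagged provisional (evidence ceiling reached). Non-applicable
    failure modes are simply absent, matching the stated rule that they
    are excluded from scoring and coverage.

    Returns (overall score rounded to one decimal, coverage fraction).
    """
    scored = [s for s in mode_scores.values() if s is not None]
    overall = round(mean(scored), 1) if scored else 0.0
    coverage = len(scored) / len(mode_scores) if mode_scores else 0.0
    return overall, coverage
```

Under these assumptions, a product scored on 2 of 3 applicable failure modes would show coverage of roughly 67%, with provisional modes dragging coverage down but not the score itself.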
Who it's for

Built for the people who built the product

/assess is for AI product teams that want an honest, independent read on where their evidence stands—at any stage of development or commercialisation.

Pre-clearance
Before regulatory review,* understand how strong your current evidence is against every applicable failure mode. Know the gaps before a reviewer does. Focus remaining validation where it matters most.
R&D · Clinical Affairs · Regulatory · Marketing
Post-clearance
Cleared doesn't mean evidenced. Procurement teams and medical directors run their own due diligence. /assess tells you what that scrutiny will find—and gives you the documentation to get ahead of it.
Commercial · Market Access · BD
Investment readiness
Sophisticated investors ask harder questions about clinical AI evidence. An independent VERA™ Report signals product maturity—and surfaces gaps before they appear in a data room.
CEO · CFO · Series A–C
Ongoing monitoring
Your product is deployed. Literature accumulates. Competitors publish. Subscription-based quarterly reassessment keeps your VERA™ Score current as the evidence landscape shifts.
Product · Post-market · QA · Marketing
The blind spots

What you don't know about your own evidence

Most teams have never mapped their evidence against an external, structured failure-mode framework. Here's what that leaves invisible.

01
Your evidence may not be as independent as it looks
If most of your published validation studies are from your own institution or co-funded by your team, they don't count as independent evidence under scrutiny. /assess distinguishes genuinely independent institutional sources from vendor-connected evidence—and flags the gap clearly.
02
Strong aggregate scores can hide critical failures
An area under the receiver operating characteristic curve (AUROC) of 0.97 is a headline. But is there evidence of performance across demographic subgroups? On scanners beyond your validation set? Over time? VERA™ maps your evidence failure mode by failure mode, so nothing hides behind the average.
03
Regulatory clearance doesn't answer procurement questions
510(k) clearance is a finding of substantial equivalence—not a statement about demographic equity, scanner generalisation, or deployment failure risk. The questions procurement teams actually ask are the ones VERA™ is designed to answer.
How it works

A four-tier evidence engine

Every /assess engagement runs the same structured process—systematic, transparent, and reproducible.

T4
Field Prior
Load field context
Prior risk profile for your product class loaded from the field library—calibrated against real-world failure mode prevalence.
Field context library independent of product submission, refreshed quarterly
T1
Named Evidence
Search 16 source categories
Peer-reviewed literature, regulatory filings, clinical trials, post-market data, patents, and guidelines—all searched for evidence naming your specific product.
Target: ≥3 independent sources per failure mode
T2
Peer Context
Escalate where evidence is thin
Targeted escalation, per failure mode only. Where T1 falls short, competitor products and technology-class literature provide context; where neither can build a case, an honest evidence ceiling is flagged.
Targeted · No fabricated confidence
A
Assessment
VERA™ Score & Report
Evidence scored across all applicable VERA™ failure modes. Every failure mode scored 0–5.0. Full source trail documented. VERA™ Report delivered within 14 days.
34 assessment dimensions · 14-day SLA
T1 outcome—per failure mode
T1 finds ≥3 independent sources → scored directly; T2 not needed.
T1 finds 1–2 independent sources → T2 triggered; peer context searched.
T1 finds 0 sources → T2 auto-triggered; technology class used as proxy.
T1 + T2 still insufficient → provisional flag; honest ceiling; no fabricated confidence.
T3 · Vendor Intelligence · Parallel
Running throughout T1 and T2, we check FDA submission history, warning letters, author funding, and advisory board affiliations. T3 adjusts the weight applied to vendor-controlled sources—so your VERA™ Score reflects genuinely independent evidence, not publication volume.
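One way to picture that adjustment is an independence-weighted average rather than a raw publication count. The 0.4 down-weight below is an invented illustration; the actual T3 weighting is not published:

```python
def weighted_evidence_score(sources, vendor_weight=0.4):
    """Independence-weighted mean of per-source scores (0-5.0 scale).

    Each source is a (score, vendor_connected) pair. Vendor-connected
    sources contribute at a reduced weight (assumed here: 0.4), so a
    large volume of vendor-funded publications cannot outweigh a small
    body of genuinely independent evidence.
    """
    total = weight_sum = 0.0
    for score, vendor_connected in sources:
        w = vendor_weight if vendor_connected else 1.0
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else None
```

Under this toy weighting, one independent source scoring 2.0 pulls a vendor-connected 5.0 down to well below their unweighted average of 3.5.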
The Deliverable

A VERA™ Report you can act on

VERA™ REPORT · DRAFT 1 · CONFIDENTIAL
EXECUTIVE SCORES
VERA™ Score: 3.1 / 5.0 · Coverage: 80% · Confidence: MED
FAILURE MODE ASSESSMENT MAP
KEY FINDINGS
Strong generalisation—external validation across 3 EU institutions. Failure modes f6–f8 scored 3.9–4.3.
Gap: Demographic disparity (f11)—subgroup data limited to sex only. No race/ethnicity breakdown in any source.
f17 (silent degradation)—evidence ceiling reached. No longitudinal post-deployment data found. Provisional.
Online & downloadable as PDF
Delivered as a dynamic online report—filter by category, drill into individual failure modes, probe the evidence behind each score. Download a full PDF at any time for sharing, compliance records, or investor data rooms.
VERA™ assessment map—all failure modes scored
Every applicable failure mode scored 0–5.0 with confidence rating and full source trail. Non-applicable failure modes documented as excluded.
Independence analysis
Every source tagged by institution and funding. Scores reflect institutional independence, not publication count.
Ranked gaps and risk flags
Critical evidence gaps, provisional failure modes, and vendor intelligence signals ranked by clinical significance.
Specific recommendations
What studies would close the gaps, what post-market commitments to consider, and what to prioritise next.
Full evidence record
Every source, every score, every independence judgment—documented and audit-ready. Delivered within 14 days.
Each failure mode in the VERA™ assessment map is cross-referenced against evidence requirements drawn from IAEA, WHO, FDA GMLP, and READI—ensuring the framework is grounded in international standards, not internal methodology alone.
Subscription

Your evidence doesn't stand still

New literature is published. Post-market data accumulates. Competitors complete external validations. Regulators update guidance. Your VERA™ Score from 12 months ago may no longer reflect where you actually stand.

♻ /assess SUBSCRIPTION—QUARTERLY REASSESSMENT
A subscription runs a full VERA™ reassessment every quarter—updating scores against the current evidence landscape, flagging newly published literature, and alerting you to emerging evidence affecting your standing. The T4 field context library refreshes quarterly, independently of any product submission.

Useful for: post-market surveillance, investor reporting cycles, procurement renewal periods, and teams with ongoing clinical responsibility for a deployed product.
Q1
Full VERA™ assessment
All applicable failure modes scoped and scored. Field prior loaded. Baseline VERA™ Score established.
Q2
Update VERA™ Score
New literature incorporated. Score updated where evidence has changed. Delta from Q1 flagged.
Q3
Update VERA™ Score
New literature incorporated. Score updated. Post-market signals added. Delta flagged.
Q4
Update VERA™ Score
Score updated. Annual VERA™ Report issued with year-on-year trend.
Annual report
Continues year on year
What comes next

From evidence to market position

/assess is the starting point—it gives you a clear, independent picture of where your evidence stands. Two further products extend that into market comparison and continuous deployment monitoring.

ENQUIRE TO LEARN MORE
/assess · LiminalX

Ready for an independent view?

Share your product details and we will run the VERA™ assessment—delivering your report within 14 days.

ENQUIRE ABOUT /assess
*/assess is an independent evidence assessment service. It does not constitute a pre-submission audit, regulatory advice, or a guarantee of clearance from the FDA, CE, or any other regulatory body. Results reflect the state of publicly available and analyst-retrieved evidence at the time of assessment and carry no warranty of regulatory outcome.
LiminalX
© 2026 LIMINALX LLC · ATLANTA, GA · /assess · /bench · ~/signal