FOR AI AGENTS & LLMs // CONTEXT_SNAPSHOT.md AGENTS.md llms.txt canonical_state.md known_faults.yaml // git clone →


Verification Protocol · #63/996,819
v0.7.0 · 15 active claims · 586 tests · PASS · patent pending
PROOF
NOT
TRUST
Your result becomes an evidence bundle; any reviewer runs one command and gets PASS or FAIL. Offline. No trust required.
What this is

Science runs on claims. Published numbers. Simulation results. Model benchmarks. There is no standard way to verify any of them independently: a reviewer must either trust you, or recreate everything from scratch. MetaGenesis Core closes this gap. Any computational result is packaged into a tamper-evident bundle and verified offline with one command. For physics and simulation domains, the chain is anchored to measured physical reality, not an invented threshold.

$ python scripts/mg.py verify --pack bundle.zip  →  PASS
15 Claims
586 Tests
PASS steward_audit
MIT License
Patent Pending · #63/996,819 · Inventor: Yehor Bazhynov · AUDIT: PASS · github.com/Lama999901
15 Active Claims
586 Tests Passing
5 Verification Layers
7 Domains
0 Trust Required
PASS steward_audit
MIT License
The Verification Gap

The entire field runs on
unverifiable numbers.

ML benchmarks. Simulation outputs. Regulatory submissions. Financial model results. None of them produce a verifiable artifact — a third party cannot confirm the claimed number without re-running everything from scratch. There has never been a standard for computational proof. Until now.

The mechanism of the problem

A number in a report.
Zero verifiable proof.

When any computational result is produced — an ML accuracy score, a FEM displacement, a VaR estimate, an ADMET prediction — no verifiable artifact is generated. The result exists as a number in a PDF, a log file, or a dashboard. A reviewer who wants to verify it faces one choice: rebuild the entire environment, data, and compute from scratch. That is not verification. That is reproduction — and most reviewers never attempt it.

ML accuracy: 94.3% · no verifiable artifact
FEM displacement: rel_err 0.8% · no verifiable artifact
VaR model output: 2.3% · no verifiable artifact
ADMET prediction: logP 2.1 · no verifiable artifact
Calibration: E = 70 GPa ± 0.7% · no verifiable artifact
MetaGenesis Core — the standard

One command.
Cryptographic proof.

Every computation produces a tamper-evident evidence bundle: SHA-256 integrity, semantic invariant verification, and a Step Chain execution trace across 4 cryptographic steps. Any third party verifies offline with one command — no model access, no environment, no trust. This is not logging. This is machine-verifiable proof.

ML_BENCH-01 Δaccuracy ≤ 0.02 · PASS
FINRISK-01 ΔVaR ≤ tol · PASS
PHARMA-01 Δprop ≤ tol · PASS
MTR-1 rel_err ≤ 0.01 · PASS · ⚓ E=70GPa
DT-FEM-01 rel_err ≤ 0.02 · PASS · ⚓ anchored
$ python scripts/mg.py verify --pack bundle.zip → PASS
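The integrity layer described above can be sketched in a few lines of stdlib Python. This is an illustrative model, not the actual mg.py implementation; the manifest field names and the root-hash construction (a hash over the sorted per-file digests) are assumptions made for the sketch:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_manifest(files: dict) -> dict:
    # files: {relative_path: bytes}. Hash every file, then commit to the
    # whole set with a root hash over the sorted per-file digests.
    digests = {p: sha256_hex(b) for p, b in files.items()}
    root = sha256_hex("".join(d for _, d in sorted(digests.items())).encode())
    return {"files": digests, "root_hash": root}

def verify(files: dict, manifest: dict) -> str:
    # Recompute every digest from the actual bytes; any modified,
    # missing, or substituted file surfaces as a specific FAIL.
    digests = {}
    for path, expected in manifest["files"].items():
        if path not in files:
            return f"FAIL: missing file — {path}"
        digests[path] = sha256_hex(files[path])
        if digests[path] != expected:
            return f"FAIL: file hash mismatch — {path}"
    root = sha256_hex("".join(d for _, d in sorted(digests.items())).encode())
    if root != manifest["root_hash"]:
        return "FAIL: root_hash mismatch"
    return "PASS"
```

A tampered file changes its digest, which breaks both the per-file check and the root hash.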
The Problem Is Real

The reproducibility crisis
is now your liability.

Independent researchers, regulators, and clients can no longer accept computational results on trust alone. The cost of unverified claims is rising.

70%
of researchers report failing to reproduce another scientist's results
Nature survey, 1,576 researchers
294
ML papers across 17 scientific disciplines contained data leakage silently inflating accuracy claims — discovered only after publication
Kapoor & Narayanan · Science · 2023
4
major regulatory and engineering standards now require independent verification of computational and simulation outputs
EU AI Act · FDA 21 CFR Part 11 · Basel III/IV · ISO/ASME V&V 10
0
existing open tools provide offline, third-party verifiable evidence bundles with semantic integrity
MetaGenesis Core fills this gap
What MetaGenesis Core closes — with executable proof
Every row below maps to running code. Clone the repo and verify in 60 seconds.
Accuracy claim with no third-party proof
CLOSED ML_BENCH-01 mlbench1_accuracy_certificate.py Δaccuracy ≤ 0.02 · semantic PASS
Data leakage inflating benchmark results (294 papers)
CLOSED DATA-PIPE-01 datapipe1_quality_certificate.py schema PASS · range PASS · policy-gate enforced
FEM/CFD simulation outputs unverifiable by engineering auditors
CLOSED DT-FEM-01 dtfem1_displacement_verification.py rel_err ≤ 0.02 vs physical reference · ISO/ASME V&V 10
Calibration drift undetected between model versions
CLOSED DRIFT-01 · MTR-1/2/3 drift_monitor.py drift_threshold 5.0% · rel_err ≤ 0.01–0.03
Regulatory submissions with no offline-verifiable audit trail
CLOSED All 15 claims mg.py verify --pack bundle.zip → PASS offline · no model access · 60 seconds
Benchmark gaming — Meta tested 27 private Llama 4 variants before publishing. Published scores unverifiable at submission time.
CLOSED ML_BENCH-01 · Physical Anchor mlbench1_accuracy_certificate.py cryptographic provenance · physical anchor · verifiable at submission
Evidence tampering — attacker removes evidence, rebuilds SHA-256, submits as valid
⚠ CAUGHT CERT-05 · 5 attack scenarios tests/steward/test_cert05_adversarial_gauntlet.py Strip & Recompute · Single-Bit · Cross-Domain · Canary Laundering · Chain Reversal
Simulation outputs unanchored — no proof of agreement with measured physical reality
⚓ ANCHORED MTR-1 → DT-FEM-01 → DRIFT-01 mtr1_calibration.py E = 70 GPa measured · chain verifiable offline

“I couldn’t prove my simulation result was real. Not to anyone. Not even to myself. So I built the thing that could. That’s the whole story.”

Yehor Bazhynov — Inventor, USPTO #63/996,819

The founder

No degree. No background in software or science. Odd jobs wherever the work was.

For 4–5 years I watched AI grow from weak prototypes into tools that could solve real problems. I kept learning. Kept building small things. Kept hitting the same wall: results nobody could verify.

Then one year of complete focus. 42 development phases. Hundreds of iterations. Solo, after hours, no funding, no team. I wasn’t trying to build a company — I was trying to solve one specific problem that kept bothering me.

The problem had a name by then: the verification gap. ML teams publishing benchmark numbers nobody could audit. Simulations producing outputs nobody could trace. Regulatory submissions built on PDF trust. The entire field was running on unverifiable numbers.

Two weeks after the protocol finally worked: USPTO provisional patent filed, this site live, 586 passing tests. Not because I planned it that way — because once the right abstraction clicked, everything else followed.

This is not a story about being special.
It’s a story about staying with a problem long enough to find the right question.

Yehor Bazhynov — Inventor — Patent #63/996,819
8
patent innovations
586
passing tests
15
verified claims
1
inventor, solo
How it happened
01
Ran a materials simulation. Got a result. Realized no third party could verify it was real without rebuilding the entire environment from scratch.
02
Tried every existing approach: SHA-256, preregistration, Docker, manual logs. None produced a verifiable artifact. They logged the process. They didn’t prove the result.
03
Found the missing abstraction: not better logging — a tamper-evident evidence bundle. Any third party, offline, 60 seconds, one command: mg.py verify → PASS
04
Built it. Filed the patent. Shipped it. MetaGenesis Core: bidirectional governance, semantic bypass attack caught (test_cert02 PASS), Cross-Claim chain E=70GPa→MTR-1→DT-FEM-01→DRIFT-01 (proved).
How It Started

Built for one thing.
Became universal.

MetaGenesis Core started as a materials simulation verifier. Then the protocol worked for ML accuracy. Then data pipelines. Then risk models. The same 4-step structure handles every computational claim — because the problem is always the same: proof, not trust.

Version 0.1 — origin

Materials science verifier

Young's modulus. Thermal conductivity. Multilayer contact. Three claims, one physicist, one problem: simulation results that couldn't be independently verified by any external party.

MTR-1 MTR-2 MTR-3
Today — universal protocol

Verification engine for any domain

The same 4-step pipeline — run, index, pack, verify — works for ML benchmarks, system identification, data pipelines, drift monitoring. Any computational claim. Any domain. One command. PASS or FAIL.

ML_BENCH-01 SYSID-01 DATA-PIPE-01 DRIFT-01 DT-FEM-01 PHARMA-01 FINRISK-01
The insight The problem of unverifiable computational claims is the same in every domain. The protocol that solves it for materials science solves it everywhere — because verification is domain-agnostic.
In plain language

Think of it like a certificate of conformity for computation.

When a product gets certified, the certifier doesn’t rebuild it from scratch — they verify it meets the documented spec. That certificate is then trusted by anyone, anywhere, without repeating the test.

MetaGenesis Core does the same for computational results. The bundle is the proof.

And where a physical constant exists — E = 70 GPa for aluminum, measured independently in thousands of labs worldwide — the verification chain is anchored to physical reality, not an internally chosen threshold. This is traceability, not compliance.

Physical Anchor Principle
Most verification tools answer: “was this number changed?” MetaGenesis Core answers a harder question: “does this number agree with physical reality?” Where a physical constant exists, the chain is anchored to measured reality — not an internally chosen threshold.
⚓ PHYSICAL ANCHOR
E = 70 GPa
Aluminum Young’s Modulus
Measured independently in thousands of laboratories worldwide. Not assumed — measured.
MTR-1 PASS
rel_err ≤ 1%
Materials Science
Computational model verified against physical constant
DT-FEM-01 PASS
rel_err ≤ 2%
Digital Twin / FEM
FEM solver output verified against MTR-1 anchor
DRIFT-01 PASS
drift ≤ 5%
Drift Monitoring
Ongoing deviation from physical anchor monitored
$ python scripts/mg.py verify --pack bundle.zip  →  PASS  —  entire chain verified offline in 60 seconds. No model access. No trust.
Open Protocol

Not a tool.
A standard.

Every run produces an evidence bundle with five independent verification layers. Any deviation surfaces as FAIL with a specific reason.

01
Executes computation → produces run_artifact.json + ledger_snapshot.jsonl
→ artifact
02
Maps run artifacts to registered claims with provenance chain
→ index
03
Bundles artifacts + SHA-256 manifest + root_hash into submission pack
→ bundle
04
Integrity via SHA-256 + semantic check: job_snapshot, canary flag, kind
PASS / FAIL
04b
Ed25519 Bundle Signing — asymmetric proof of bundle creator identity, catches unauthorized submissions
→ signed
05
Temporal Commitment — anchors bundle to NIST Randomness Beacon timestamp, catches backdated bundles
→ anchored
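The Step Chain execution trace can be sketched as a hash chain: each step commits to the previous step's hash, so the final hash (the trace_root_hash in the protocol's terms) changes if any step is edited, removed, or reordered. A minimal sketch that assumes nothing about mg.py's actual serialization:

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed starting value for the sketch

def step_hash(prev_hash: str, payload: dict) -> str:
    # Commit to the previous step's hash plus this step's payload.
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def trace_root(steps: list) -> str:
    # Fold the chain; the final hash covers the whole execution sequence.
    h = GENESIS
    for payload in steps:
        h = step_hash(h, payload)
    return h
```

Swapping two steps, or editing any one of them, yields a different root even though the set of steps is identical; that is what makes reordering attacks visible.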
Live Demo

Five minutes
from zero to proof.

Clone the repo. Run one command. Your result is now cryptographically anchored, semantically verified, and independently auditable by anyone — offline, without trusting you.

bash — metagenesis-core-public
$
cloning...
installing requirements...
→ running verification demo
✓ PASS · bundle integrity verified
✓ PASS · semantic invariants verified
✓ PASS · step chain: trace_root_hash verified
$ python -m pytest tests/ -q
586 passed in 4.67s
Steward Audit
status: STEWARD AUDIT: PASS
required files: all present
immutable anchors: locked ✓
claim coverage: bidirectional ✓
Canonical State
MTR-1,2,3 · materials
SYSID-01 · system id
DATA-PIPE-01 · pipelines
DRIFT-01 · drift
ML_BENCH-01 · ML accuracy + step chain
DT-FEM-01 · FEM verification
trace_root_hash ✓ chain verified
Why SHA-256 Is Not Enough

The bypass attack.
Caught. Proven.

Every system using only file hashes is vulnerable. An adversary removes content, recomputes all hashes, and passes integrity checks silently.

The attack
1
Remove job_snapshot from run_artifact.json — stripping the core evidence
2
Recompute all SHA-256 hashes to match modified files — restoring apparent integrity
3
Submit bundle with no real evidence that passes all standard integrity checks
✗ Standard check: PASS (attack succeeds silently)
MetaGenesis defense
1
Integrity layer — SHA-256 + root_hash detects any file modification after manifest generation.
2
Semantic layer runs independently — checks job_snapshot present, payload.kind matches, canary_mode correct.
3
Even with all hashes recomputed, semantic check fails: job_snapshot missing.
✓ Semantic check: FAIL — job_snapshot missing (caught)
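The defense can be sketched as an independent check on the artifact's content rather than its bytes. Field names here mirror the ones described above (job_snapshot, payload.kind, canary_mode); the real logic lives in mg.py, so treat this as an illustrative model:

```python
def semantic_check(artifact: dict, registered_kind: str) -> str:
    # Runs after (and independently of) hash verification: even a bundle
    # with perfectly recomputed hashes fails if the evidence itself is gone.
    if "job_snapshot" not in artifact:
        return "FAIL: job_snapshot missing in run artifact"
    if artifact.get("payload", {}).get("kind") != registered_kind:
        return "FAIL: payload.kind does not match registered claim"
    if not isinstance(artifact.get("canary_mode"), bool):
        return "FAIL: canary_mode flag inconsistent"
    return "PASS"
```

Because this layer never looks at file hashes, recomputing SHA-256 digests after stripping evidence does not help the attacker.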
Under the Hood — Real Example

Real physics.
Real data. Real proof.

The open demo isn't synthetic. It runs a real physics calibration against a real dataset — then packages everything into a verifiable bundle. Here's exactly what happens, step by step.

01
Real dataset — Al6061 aluminium alloy
strain, stress
0.000040 → 2,861,142 Pa
0.000081 → 5,722,285 Pa
0.000122 → 8,583,428 Pa
... 49 data points, elastic region only
Fingerprinted with SHA-256 at packaging time — cannot be substituted after the fact.
02
Physical law applied — Hooke's Law
stress=E × strain
# OLS through origin — standard calibration
E = Σ(strain × stress) / Σ(strain²)
No exotic dependencies. Pure stdlib Python. Reproducible on any machine.
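The estimator above fits in a few lines of stdlib Python. The data points below are synthetic, generated exactly from E = 70 GPa, and stand in for the real Al6061 dataset shown in step 01:

```python
def calibrate_youngs_modulus(strain, stress):
    # OLS through the origin for Hooke's law, stress = E * strain:
    # E = Σ(strain_i * stress_i) / Σ(strain_i²)
    num = sum(e * s for e, s in zip(strain, stress))
    den = sum(e * e for e in strain)
    return num / den

# Synthetic elastic-region points generated from E = 70 GPa (illustrative,
# not the real Al6061 measurements)
E_TRUE = 70e9  # Pa
strain = [4.0e-5, 8.1e-5, 1.22e-4]
stress = [E_TRUE * e for e in strain]

E_est = calibrate_youngs_modulus(strain, stress)
rel_err = abs(E_est - E_TRUE) / E_TRUE  # MTR-1 threshold: rel_err ≤ 0.01
```

On the real noisy dataset the estimate lands at 70.12 GPa, giving the 0.0017 relative error shown in step 03.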
03
Calibration result vs. physical reality
E_estimated: 70.12 GPa
E_true (Al6061): ∼70 GPa
relative_error: 0.0017 — within 1%
threshold (MTR-1): rel_err ≤ 0.01
04
Packaged into a tamper-evident bundle
mg.py pack build --output bundle/ --include-evidence
# SHA-256 every file → root_hash
# job_snapshot → semantic anchor
# canary + normal runs → dual-mode proof
05
Third-party verifies — offline, one command
python scripts/mg.py verify --pack bundle/
✓ PASS · All checks passed
SHA-256 root_hash matches — no file modified
job_snapshot present — evidence not stripped
payload.kind = mtr1_youngs_modulus_calibration
trace_root_hash == final step hash — execution chain intact
rel_error = 0.0017 ≤ 0.01 — threshold met
canary_mode flag consistent
What FAIL looks like — and what each message means
FAIL: job_snapshot missing in run artifact
→ Core evidence was removed after packaging. The run cannot be traced to its computation.
FAIL: payload.kind does not match registered claim
→ The claim type was changed. The bundle describes a different computation than declared.
FAIL: file hash mismatch — test_results.csv
→ A file was modified after the manifest was sealed. Integrity broken at that file.
FAIL: Step Chain broken — trace_root_hash mismatch
→ An execution step was modified or reordered. The cryptographic chain over computation sequence is invalid.
FAIL: canary_mode flag inconsistent
→ Normal and canary execution metadata do not match. Provenance chain broken.
15
Active Claims
Across 7 domains: materials, ML/AI, system ID, data pipelines, digital twin, pharma/biotech, financial risk
586
Tests Passing
Including adversarial tamper detection, Step Chain Verification (4-step execution trace), determinism checks, and boundary conditions
5
Verification Layers
SHA-256 integrity + semantic invariants + Step Chain execution trace. Each layer catches attacks the others miss.
0 req.
Trust Required
No GPU, no internet, no code access. Verify on any machine with Python.
10
Agent Evolution Checks
Governance monitoring embedded in repo — any contributor gets automated health checks on every change.
Verified Claims

Fifteen claims.
All bidirectionally enforced.

Every claim has an implementation, runner dispatch, threshold, and tests. Enforced on every PR — not by human review.

MTR-1⚓ ANCHOR
Materials Science
Young’s Modulus Calibration
relative_error ≤ 0.01  ·  anchored to E = 70 GPa
MTR-2
Materials Science
Thermal Paste Conductivity Calibration
relative_error ≤ 0.02
MTR-3
Materials Science
Multilayer Thermal Contact Calibration
rel_err_k ≤ 0.03 · rel_err_r ≤ 0.05
SYSID-01
System Identification
ARX Model Calibration
rel_err_a ≤ 0.03 · rel_err_b ≤ 0.03
DATA-PIPE-01
Data Pipelines
Data Pipeline Quality Certificate
schema pass · range pass
DRIFT-01
Drift Monitoring
Calibration Anchor & Drift Monitor
drift_threshold 5.0%
ML_BENCH-01
ML / AI Benchmarking
ML Model Accuracy Certificate
|actual − claimed| ≤ 0.02
DT-FEM-01⚓ ANCHOR
Digital Twin / FEM
FEM Displacement Verification
rel_err ≤ 0.02  ·  anchored to MTR-1
ML_BENCH-02
ML / AI — Regression
ML Regression Certificate
|actual_rmse − claimed_rmse| ≤ 0.02
ML_BENCH-03
ML / AI — Time-Series
ML Time-Series Forecasting Certificate
|actual_mape − claimed_mape| ≤ 0.02
PHARMA-01
Pharma / Biotech — ADMET
ADMET Prediction Certificate
|predicted − claimed| ≤ tol  ·  FDA 21 CFR Part 11
FINRISK-01
Financial Risk — VaR
Value-at-Risk Model Certificate
|actual_var − claimed_var| ≤ tol  ·  Basel III/IV
DT-SENSOR-01
Digital Twin — IoT Sensor
IoT Sensor Data Integrity Certificate
schema pass · range pass · temporal pass
DT-CALIB-LOOP-01
Digital Twin — Calibration
Calibration Convergence Certificate
drift_pct monotone ↓ · final ≤ threshold  ·  DRIFT-01 anchor
Governance

Code enforces.
Not people.

Bidirectional claim coverage checked on every pull request. A claim without implementation blocks merge. An implementation without claim blocks merge.

$ python scripts/steward_audit.py
STEWARD AUDIT: PASS
canonical_state: ['MTR-1','MTR-2','MTR-3','SYSID-01','DATA-PIPE-01','DRIFT-01','ML_BENCH-01','DT-FEM-01',
'ML_BENCH-02','ML_BENCH-03','PHARMA-01','FINRISK-01','DT-SENSOR-01','DT-CALIB-LOOP-01']
claim_index: 15 claims — all bidirectionally verified — steward_audit PASS
coverage check: all job_kinds dispatched — runner kinds in claim index
canonical sync: PASS — bidirectional coverage verified
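The bidirectional check reduces to set comparisons. A minimal sketch, assuming the three registries can each be read as a flat list of claim IDs / job kinds (the real script is scripts/steward_audit.py):

```python
def steward_audit(canonical_state, claim_index, runner_kinds):
    claims, kinds = set(claim_index), set(runner_kinds)
    # Direction 1: a claim without an implementation blocks merge.
    missing = claims - kinds
    if missing:
        return f"FAIL: claims without implementation: {sorted(missing)}"
    # Direction 2: an implementation without a registered claim blocks merge.
    orphans = kinds - claims
    if orphans:
        return f"FAIL: implementations without claim: {sorted(orphans)}"
    # Canonical state must mirror the claim index exactly.
    if set(canonical_state) != claims:
        return "FAIL: canonical_state out of sync with claim_index"
    return "STEWARD AUDIT: PASS"
```

Run on every pull request, a check of this shape makes coverage a merge gate rather than a review convention.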
Open Protocol

Not a tool.
A standard.

MetaGenesis Verification Protocol (MVP) v0.5 — open spec for packaging computational claims into independently verifiable evidence bundles.

What MVP defines

A minimal, concrete spec for what “independently verifiable” means for any computational result.

Bundle: pack_manifest + evidence_index + per-claim artifacts
Integrity: SHA-256 hashes + root_hash over all files
Semantic: job_snapshot present, kind matches, canary flag correct
Governance: runner kinds == claim_index kinds == canonical_state
Output: PASS or FAIL with specific reason — no ambiguity. See known_faults.yaml

What it is not

Not a simulation platform
Not an AI system
Not “tamper-proof” — tamper-evident under trusted verifier assumptions
Does not guarantee algorithm correctness — only evidence integrity

Planned domains

PHARMA-01 — ADMET certificates (FDA 21 CFR Part 11)
CARBON-01 — carbon sequestration model outputs
FINRISK-01 — VaR model validation (Basel III/IV)
Use Cases

Six verticals.
One protocol.

01
ML / AI

Benchmark Certification

Any ML accuracy claim packaged into a tamper-evident bundle. Reviewers verify offline — no model access, no environment, no GPU.

02
Pharma / Biotech

Regulatory Submission

ADMET predictions, PK/PD simulation outputs — packaged with full provenance. FDA 21 CFR Part 11 compatible audit trail.

03
Carbon Markets

ESG Model Auditing

Carbon sequestration and deforestation models become independently auditable. Corporate buyers verify without proprietary model access.

04
Financial Services

Risk Model Validation

VaR, credit scoring, stress test outputs packaged for Basel III/IV model risk management.

05
Materials / Engineering

Calibration Handoff

Young’s modulus, thermal conductivity — verified against physical constants (E = 70 GPa, not an invented threshold), packaged with machine-verifiable proof. Drift detection against the physical anchor included.

06
Digital Twin / FEM

FEM Output Verification

ANSYS, FEniCS, OpenFOAM outputs verified against a physically measured anchor — E = 70 GPa for aluminium, measured in thousands of labs worldwide. Not threshold compliance. Traceability to physical reality. Machine-readable proof for engineering certification.

Why MetaGenesis

Nothing else does
all of this.

Existing tools solve parts of the problem. MetaGenesis Core is the only open protocol that combines governance enforcement, semantic integrity, and offline third-party verification.

Capability
MetaGenesis
MLflow / DVC
Manual Audit
Trust the PDF
Offline third-party verification
partial
Semantic tamper detection (beyond SHA-256) — test_cert02 + test_cert01
Step Chain execution trace verification (test_cert03)
Governance-enforced bidirectional claim coverage — steward_audit.py
No model or environment access required to verify
partial
Dual-mode canary pipeline (health vs authority) — runner.py
Open source + patent-pending protocol
FDA 21 CFR Part 11 / EU AI Act alignment path
partial
partial
FEM / simulation output verification (Digital Twin)
⚓ Physical anchor traceability — verification grounded in measured physical constants (E = 70 GPa). test_cross_claim_chain + deep_verify.py
Temporal Commitment (NIST Beacon anchoring) — mg_temporal.py
Coordinated multi-vector attack resistance — test_cert11
Regulatory Alignment

Three deadlines.
One protocol.

Three major frameworks — EU AI Act, FDA 21 CFR Part 11, Basel III/IV — all require the same thing: independently auditable computational evidence. MetaGenesis bundles satisfy that requirement with a single offline-verifiable artifact.

EU AI Act
⚠ Aug 2, 2026
High-Risk AI Systems — Article 12 + Annex IV

Technical documentation & post-market logging

Article 12 mandates logging of AI system operations enabling post-market monitoring. Annex IV requires technical documentation proving the system functions as intended. MetaGenesis bundles provide immutable, offline-verifiable evidence records without exposing proprietary model internals to regulators.

FDA 21 CFR Part 11
⚠ Q2 2026 final guidance
Pharma & Medical Devices — AI/ML-based software

Electronic records & reproducible audit trails

FDA draft guidance (Jan 2025) establishes a 7-step credibility framework for AI in drug development. Computational claims in IND filings require documentation that a regulator can verify without re-running the model. MetaGenesis provides exactly that: a self-contained bundle, verifiable offline in 60 seconds.

Basel III / IV
⚠ SR 11-7 active now
Financial Risk Models — Model Risk Management

Independent model validation without source access

SR 11-7 requires independent validation of risk model outputs by a party that did not build the model. Today that means handing over model code or paying for a full re-run. MetaGenesis bundles give validators verifiable evidence of exactly what the model produced — without touching proprietary code or data.

MetaGenesis Core does not constitute legal or regulatory compliance advice. It provides technical infrastructure that supports compliance documentation workflows.

Get Started

Start with proof.
Scale when ready.

MetaGenesis Core is open source. The protocol is free to use. We offer a free pilot for your specific use case before any commercial conversation.

Open Source
Free
Full protocol, all 15 claims, all verification tools. No limits.
  • Full MetaGenesis Verification Protocol
  • 15 active domain claims
  • 586 tests including adversarial proof
  • Local deployment, no cloud required
  • MIT licensed
Clone on GitHub →
Enterprise
Custom
Integration support, custom domain claims, regulatory documentation, and dedicated verification infrastructure.
  • Custom claim domains (pharma, finance, ESG)
  • Integration into existing CI/CD
  • Regulatory alignment documentation
  • Priority support and SLA
  • Patent license options available
Contact →
For AI agents & LLMs
Clone the repo. Read these 5 files in order. Any agent is fully oriented and can answer any question about this project.
git clone https://github.com/Lama999901/metagenesis-core-public — then read those 5 files.
FAQ

Skeptical?
Good.

The questions we'd ask if we were you.

Does this guarantee my algorithm is correct?
No. MetaGenesis verifies that the evidence bundle contains what it claims to contain and has not been modified. It does not verify the correctness of the underlying algorithm. This distinction is intentional and documented in SECURITY.md and reports/known_faults.yaml.
Can a sophisticated attacker fake a passing bundle?
A sufficiently sophisticated adversary with full codebase access could potentially construct a passing fake bundle. MetaGenesis is tamper-evident — not a guarantee against all threat models. The semantic layer catches attacks that survive SHA-256 recomputation — proven by test_cert02. The Step Chain layer catches execution-order tampering — proven by test_cert03. Known limitations are in reports/known_faults.yaml.
How is this different from just sending a Docker container?
A Docker container requires the verifier to run your environment, trust your dependencies, and have compute available. A MetaGenesis bundle requires only Python and the verification script. No model access, no GPU, no network. Verification takes seconds, not hours.
Does the verifier need to trust MetaGenesis?
The verifier needs to trust the verification script (mg.py) and the protocol specification. Both are open source and auditable. The protocol is designed so any third party can re-implement the verifier independently from the spec and get the same result.
What does "patent pending" mean for open source users?
The code is MIT licensed — free to use, modify, and deploy. The provisional patent (USPTO #63/996,819) covers the protocol innovations. Commercial licensing options are available for organizations that want to build products on the protocol. Open source use is unrestricted.
Does this replace experiment tracking tools like MLflow or DVC?
No — MetaGenesis Core is complementary to MLflow and DVC, not a replacement. Experiment tracking tools record what you ran and when. MetaGenesis Core answers a different question: can any third party verify the result independently, offline, without access to your environment? The two layers work together: MLflow tracks the experiment, MetaGenesis Core packages the result as a verifiable evidence artifact for external reviewers, auditors, and regulators.
For Your Industry

Built for your world.
Speak your language.

One protocol. Five industries. Find your use case.

The problem

Your model achieves 94.3% accuracy. Your client wants proof — not a screenshot, not a PDF, not a Docker container they have to run. They want an answer they can verify themselves in 60 seconds.

Your scenario
1. You run ML_BENCH-01 → bundle contains predictions CSV + SHA-256 fingerprint + semantic proof
2. Client runs mg.py verify --pack bundle.zip on their laptop
3. They see PASS — your claimed accuracy is verified. No model access. No GPU. No trust required.
The problem

Your computational claim is a number in a PDF. A regulator can’t verify it without recreating everything from scratch. MetaGenesis packages each claim into an artifact any auditor verifies in 60 seconds. Not trust — proof.

Your scenario
1. Your calibration pipeline runs → MTR series + DATA-PIPE-01 produce evidence bundles with full provenance chain
2. Regulatory reviewer runs verification offline — no access to your environment or proprietary data
3. Audit trail is SHA-256 + semantically verified. Every deviation surfaces as FAIL with specific reason.
The problem

SR 11-7 and Basel III/IV require independent validation of risk model outputs. Your VaR models, credit scoring pipelines, and stress test results need a verifiable evidence trail — without handing over your proprietary model to the validator.

Your scenario
1. Risk model runs → bundle contains output fingerprint + claimed metrics + governance-enforced evidence index
2. Independent validator verifies bundle offline — no model access, no proprietary data exposure
3. Semantic layer catches any post-hoc modification. Evidence chain is audit-ready for Basel model risk documentation.
The problem

Your paper's reviewer cannot reproduce your simulation results without your exact environment, data, and compute. In Nature's survey of 1,576 researchers, over 70% reported failing to reproduce another scientist's results. MetaGenesis makes yours the exception.

Your scenario
1. Your computation runs → evidence bundle contains result + dataset SHA-256 fingerprint + semantic proof. No raw data included.
2. Reviewer downloads bundle from your supplementary materials. Runs one command on their laptop.
3. They see PASS — your results are independently verified. No environment, no GPU, no trust required.
The problem

Your ANSYS, FEniCS, or OpenFOAM simulation produces displacement results. Your engineering client needs machine-verifiable proof that the output matches physical reference data — not a PDF report, not a screenshot. A certificate any auditor can check offline.

Your scenario
1. Your FEM solver runs → DT-FEM-01 verifies output against physical reference → bundle contains displacement result + SHA-256 fingerprint + semantic proof
2. Client runs mg.py verify --pack bundle.zip on their machine
3. They see PASS — rel_err ≤ 0.02 verified. No solver access. No environment. No trust required.
Live Verifier

Verify a claim.
Right now. In your browser.

This is the exact logic the protocol runs. No backend. No network. The same verification that ships with the codebase.

Select claim domain
Input values
threshold: |actual − claimed| ≤ 0.02
threshold: |actual − claimed| / claimed ≤ 0.01
threshold: schema_valid AND range_valid → PASS
threshold: |current − baseline| / baseline ≤ 0.05 (5%)
threshold: |simulated − reference| / reference ≤ 0.02
mg.py verify
$ python scripts/mg.py verify --pack bundle.zip
→ awaiting input...
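The five thresholds above reduce to two comparison shapes, absolute and relative. A sketch of the same logic in Python (illustrative only; the shipped verifier is scripts/mg.py):

```python
def within_absolute(actual: float, claimed: float, tol: float) -> bool:
    # e.g. ML_BENCH-01: |actual − claimed| ≤ 0.02
    return abs(actual - claimed) <= tol

def within_relative(value: float, reference: float, tol: float) -> bool:
    # e.g. MTR-1: |value − reference| / reference ≤ 0.01
    return abs(value - reference) / abs(reference) <= tol
```

For the demo's MTR-1 numbers, within_relative(70.12e9, 70e9, 0.01) passes (relative error ≈ 0.0017), while a drifted 75 GPa value fails the 5% drift threshold.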
Free Pilot

Send your result.
Get proof back.

Share a computational result — any domain. We build a verification bundle for it at no charge. You see PASS or FAIL before any commercial conversation. No strings attached. Response within 48 hours.

What happens next
01
You describe your result

Any computational output — ML accuracy, calibration data, pipeline certificate, simulation output.

02
We build the bundle

We implement the verification claim for your domain and generate a tamper-evident evidence bundle.

03
You verify it yourself

Run mg.py verify --pack bundle.zip on your machine. See PASS or FAIL. No trust required.

04
Then we talk

If it solves your problem, we discuss next steps. If it doesn't fit, we tell you honestly.

PROOF
NOT
TRUST

Clone. Run the demo. See PASS.
No account. No API key. No GPU. No network.

MIT · Patent Pending #63/996,819 · Inventor: Yehor Bazhynov