Science runs on claims. Published numbers. Simulation results. Model benchmarks. There is no standard way to verify any of them independently: a reviewer must either trust you or recreate everything from scratch. MetaGenesis Core closes this gap. Any computational result is packaged into a tamper-evident bundle and verified offline with one command. In physics and simulation domains, the chain is anchored to measured physical reality, not an invented threshold.
ML benchmarks. Simulation outputs. Regulatory submissions. Financial model results. None of them produce a verifiable artifact — a third party cannot confirm the claimed number without re-running everything from scratch. There has never been a standard for computational proof. Until now.
When any computational result is produced — an ML accuracy score, a FEM displacement, a VaR estimate, an ADMET prediction — no verifiable artifact is generated. The result exists as a number in a PDF, a log file, or a dashboard. A reviewer who wants to verify it faces one choice: rebuild the entire environment, data, and compute from scratch. That is not verification. That is reproduction — and most reviewers never attempt it.
Every computation produces a tamper-evident evidence bundle: SHA-256 integrity, semantic invariant verification, and a Step Chain execution trace across 4 cryptographic steps. Any third party verifies offline with one command — no model access, no environment, no trust. This is not logging. This is machine-verifiable proof.
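As a sketch of what the integrity layer could check: recompute each artifact's SHA-256 against a manifest and fail on any mismatch. This is illustrative only; the file and field names here (`pack_manifest.json`, `artifacts`) are assumptions, not the actual mg.py bundle layout.

```python
# Minimal sketch of hash-based bundle integrity checking. Illustrative:
# "pack_manifest.json" and its "artifacts" field are assumed names, not
# the real mg.py bundle layout.
import hashlib
import json
import pathlib
import tempfile

def sha256_of(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_integrity(bundle_dir: pathlib.Path) -> bool:
    """PASS only if every artifact listed in the manifest hashes to its recorded digest."""
    manifest = json.loads((bundle_dir / "pack_manifest.json").read_text())
    for rel_path, expected in manifest["artifacts"].items():
        artifact = bundle_dir / rel_path
        if not artifact.exists() or sha256_of(artifact) != expected:
            return False
    return True

# Build a toy bundle, then tamper with it.
with tempfile.TemporaryDirectory() as d:
    bundle = pathlib.Path(d)
    (bundle / "run_artifact.json").write_text('{"accuracy": 0.943}')
    manifest = {"artifacts": {"run_artifact.json": sha256_of(bundle / "run_artifact.json")}}
    (bundle / "pack_manifest.json").write_text(json.dumps(manifest))
    print(verify_integrity(bundle))   # True, untouched bundle passes
    (bundle / "run_artifact.json").write_text('{"accuracy": 0.999}')
    print(verify_integrity(bundle))   # False, tampered artifact fails
```

Note that a hash check like this is only one of the layers; on its own it says nothing about *what* the bundle contains, which is why the semantic layer below exists.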
Independent researchers, regulators, and clients can no longer accept computational results on trust alone. The cost of unverified claims is rising.
“I couldn’t prove my simulation result was real. Not to anyone. Not even to myself. So I built the thing that could. That’s the whole story.”
Yehor Bazhynov — Inventor, USPTO #63/996,819
No degree. No background in software or science. Odd jobs wherever the work was.
For 4–5 years I watched AI grow from weak prototypes into tools that could solve real problems. I kept learning. Kept building small things. Kept hitting the same wall: results nobody could verify.
Then one year of complete focus. 42 development phases. Hundreds of iterations. Solo, after hours, no funding, no team. I wasn’t trying to build a company — I was trying to solve one specific problem that kept bothering me.
The problem had a name by then: the verification gap. ML teams publishing benchmark numbers nobody could audit. Simulations producing outputs nobody could trace. Regulatory submissions built on PDF trust. The entire field was running on unverifiable numbers.
Two weeks after the protocol finally worked: USPTO provisional patent filed, this site live, 586 passing tests. Not because I planned it that way — because once the right abstraction clicked, everything else followed.
This is not a story about being special.
It’s a story about staying with a problem long enough to find the right question.
mg.py verify → PASS

MetaGenesis Core started as a materials simulation verifier. Then the protocol worked for ML accuracy. Then data pipelines. Then risk models. The same 4-step structure handles every computational claim, because the problem is always the same: proof, not trust.
Young's modulus. Thermal conductivity. Multilayer contact. Three claims, one physicist, one problem: simulation results that couldn't be independently verified by any external party.
The same 4-step pipeline — run, index, pack, verify — works for ML benchmarks, system identification, data pipelines, drift monitoring. Any computational claim. Any domain. One command. PASS or FAIL.
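One way to picture the Step Chain behind that pipeline: each step's record commits to the digest of the previous step, so reordering or dropping a step breaks the chain. A minimal sketch under that assumption; the actual chain format in MetaGenesis Core may differ.

```python
# Illustrative hash-linked step chain (assumed design, not the real format).
# Each record commits to the previous step's digest, so execution-order
# tampering is detectable.
import hashlib
import json

GENESIS = "0" * 64

def link(prev_digest: str, step_name: str, payload: dict) -> dict:
    body = json.dumps({"prev": prev_digest, "step": step_name, "payload": payload},
                      sort_keys=True)
    return {"step": step_name, "payload": payload, "prev": prev_digest,
            "digest": hashlib.sha256(body.encode()).hexdigest()}

def build_chain(steps):
    chain, prev = [], GENESIS
    for name, payload in steps:
        record = link(prev, name, payload)
        chain.append(record)
        prev = record["digest"]
    return chain

def verify_chain(chain) -> bool:
    prev = GENESIS
    for record in chain:
        expected = link(prev, record["step"], record["payload"])["digest"]
        if record["prev"] != prev or record["digest"] != expected:
            return False
        prev = record["digest"]
    return True

chain = build_chain([("run", {"dataset": "strain_stress_open.csv"}),
                     ("index", {}), ("pack", {}), ("verify", {})])
print(verify_chain(chain))              # True
chain[1], chain[2] = chain[2], chain[1]  # swap execution order
print(verify_chain(chain))              # False
```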
Think of it like a certificate of conformity for computation.
When a product gets certified, the certifier doesn’t rebuild it from scratch — they verify it meets the documented spec. That certificate is then trusted by anyone, anywhere, without repeating the test.
MetaGenesis Core does the same for computational results. The bundle is the proof.
And where a physical constant exists — E = 70 GPa for aluminum, measured independently in thousands of labs worldwide — the verification chain is anchored to physical reality, not an internally chosen threshold. This is traceability, not compliance.
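In code, anchoring to a physical constant is just a relative-error check against the measured value rather than a self-chosen pass mark. A sketch using the values quoted on this page (E = 70 GPa for aluminum, rel_err ≤ 0.01 for the MTR-1 claim):

```python
# Sketch of anchoring a claim to a physical constant rather than an
# internal threshold. The anchor (70 GPa) and tolerance (rel_err <= 0.01)
# are the values quoted on this page.
E_ANCHOR_GPA = 70.0   # Young's modulus of aluminum, independently measured
REL_ERR_MAX = 0.01    # V&V threshold quoted for the MTR-1 claim

def verify_against_anchor(claimed_gpa: float) -> str:
    rel_err = abs(claimed_gpa - E_ANCHOR_GPA) / E_ANCHOR_GPA
    return "PASS" if rel_err <= REL_ERR_MAX else "FAIL"

print(verify_against_anchor(69.8))  # PASS, within 1% of the physical anchor
print(verify_against_anchor(63.0))  # FAIL, 10% off, drift against the anchor
```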
Every run produces an evidence bundle with five independent verification layers. Any deviation surfaces as FAIL with a specific reason.
NIST Randomness Beacon timestamp, catches backdated bundles.

Clone the repo. Run one command. Your result is now cryptographically anchored, semantically verified, and independently auditable by anyone, offline, without trusting you.
Every system using only file hashes is vulnerable. An adversary removes content, recomputes all hashes, and passes integrity checks silently.
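A semantic layer closes that hole by checking what the artifact must contain, not just what it hashes to. A minimal sketch, assuming a run_artifact.json with the fields named on this page (job_snapshot, payload.kind, canary_mode); the real schema may differ.

```python
# Minimal sketch of a semantic invariant check. Field names come from this
# page; the exact run_artifact.json schema is an assumption. A pure hash
# check would pass a re-hashed stripped artifact; this layer fails it with
# a specific, reportable reason.
def semantic_verify(artifact: dict, expected_kind: str, expected_canary: bool):
    failures = []
    if "job_snapshot" not in artifact:
        failures.append("job_snapshot missing")
    if artifact.get("payload", {}).get("kind") != expected_kind:
        failures.append("payload.kind mismatch")
    if artifact.get("canary_mode") != expected_canary:
        failures.append("canary_mode incorrect")
    return ("PASS", []) if not failures else ("FAIL", failures)

good = {"job_snapshot": {"seed": 42},
        "payload": {"kind": "mtr1_calibration"},
        "canary_mode": False}
# Adversary strips the core evidence, then recomputes every file hash.
stripped = {k: v for k, v in good.items() if k != "job_snapshot"}

print(semantic_verify(good, "mtr1_calibration", False))      # ('PASS', [])
print(semantic_verify(stripped, "mtr1_calibration", False))  # ('FAIL', ['job_snapshot missing'])
```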
Attack: strip job_snapshot from run_artifact.json, removing the core evidence. Semantic check: job_snapshot present, payload.kind matches, canary_mode correct. Result: FAIL with reason job_snapshot missing. Proven by tests/steward/test_cert02_pack_includes_evidence_and_semantic_verify.py.

The open demo isn't synthetic. It runs a real physics calibration against a real dataset, then packages everything into a verifiable bundle. Here's exactly what happens, step by step.
backend/progress/mtr1_calibration.py · Dataset: demos/.../strain_stress_open.csv · V&V threshold: MTR-1 rel_err ≤ 0.01

Every claim has an implementation, runner dispatch, threshold, and tests. Enforced on every PR, not by human review.
Bidirectional claim coverage checked on every pull request. A claim without implementation blocks merge. An implementation without claim blocks merge.
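The gate can be pictured as a set comparison in both directions. A hypothetical sketch; the real registry format and CI wiring in MetaGenesis Core are assumptions here:

```python
# Hypothetical sketch of the bidirectional claim-coverage gate. The registry
# shape (claim id -> implementation path, and the reverse map) is assumed,
# not the project's real format. A non-empty gap list blocks the merge.
def coverage_gaps(claims: dict, implementations: dict) -> list:
    gaps = []
    for claim, impl in claims.items():
        if impl not in implementations:
            gaps.append(f"claim {claim} has no implementation")
    for impl, claim in implementations.items():
        if claim not in claims:
            gaps.append(f"implementation {impl} has no claim")
    return gaps

claims = {"MTR-1": "backend/progress/mtr1_calibration.py"}
implementations = {"backend/progress/mtr1_calibration.py": "MTR-1"}

print(coverage_gaps(claims, implementations))  # [] -> merge allowed
print(coverage_gaps(claims, {}))               # claim without implementation -> merge blocked
```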
MetaGenesis Verification Protocol (MVP) v0.5 — open spec for packaging computational claims into independently verifiable evidence bundles.
A minimal, concrete spec for what “independently verifiable” means for any computational result.
pack_manifest + evidence_index + per-claim artifacts

Any ML accuracy claim packaged into a tamper-evident bundle. Reviewers verify offline: no model access, no environment, no GPU.
ADMET predictions, PK/PD simulation outputs — packaged with full provenance. FDA 21 CFR Part 11 compatible audit trail.
Carbon sequestration and deforestation models become independently auditable. Corporate buyers verify without proprietary model access.
VaR, credit scoring, stress test outputs packaged for Basel III/IV model risk management.
Young’s modulus, thermal conductivity — verified against physical constants (E = 70 GPa, not an invented threshold), packaged with machine-verifiable proof. Drift detection against the physical anchor included.
ANSYS, FEniCS, OpenFOAM outputs verified against a physically measured anchor: E = 70 GPa for aluminum, measured in thousands of labs worldwide. Not threshold compliance. Traceability to physical reality. Machine-readable proof for engineering certification.
Existing tools solve parts of the problem. MetaGenesis Core is the only open protocol that combines governance enforcement, semantic integrity, and offline third-party verification.
Three major frameworks — EU AI Act, FDA 21 CFR Part 11, Basel III/IV — all require the same thing: independently auditable computational evidence. MetaGenesis bundles satisfy that requirement with a single offline-verifiable artifact.
Article 12 mandates logging of AI system operations enabling post-market monitoring. Annex IV requires technical documentation proving the system functions as intended. MetaGenesis bundles provide immutable, offline-verifiable evidence records without exposing proprietary model internals to regulators.
FDA draft guidance (Jan 2025) establishes a 7-step credibility framework for AI in drug development. Computational claims in IND filings require documentation that a regulator can verify without re-running the model. MetaGenesis provides exactly that: a self-contained bundle, verifiable offline in 60 seconds.
SR 11-7 requires independent validation of risk model outputs by a party that did not build the model. Today that means handing over model code or paying for a full re-run. MetaGenesis bundles give validators verifiable evidence of exactly what the model produced — without touching proprietary code or data.
MetaGenesis Core does not constitute legal or regulatory compliance advice. It provides technical infrastructure that supports compliance documentation workflows.
MetaGenesis Core is open source. The protocol is free to use. We offer a free pilot for your specific use case before any commercial conversation.
The questions we'd ask if we were you.
See SECURITY.md and reports/known_faults.yaml.

The semantic layer catches evidence stripping, proven by test_cert02. The Step Chain layer catches execution-order tampering, proven by test_cert03. Known limitations are in reports/known_faults.yaml.

The verifier (mg.py) and the protocol specification. Both are open source and auditable. The protocol is designed so any third party can re-implement the verifier independently from the spec and get the same result.

One protocol. Five industries. Find your use case.
Your model achieves 94.3% accuracy. Your client wants proof — not a screenshot, not a PDF, not a Docker container they have to run. They want an answer they can verify themselves in 60 seconds.
They run mg.py verify --pack bundle.zip on their laptop.

Your computational claim is a number in a PDF. A regulator can't verify it without recreating everything from scratch. MetaGenesis packages each claim into an artifact any auditor verifies in 60 seconds. Not trust: proof.
SR 11-7 and Basel III/IV require independent validation of risk model outputs. Your VaR models, credit scoring pipelines, and stress test results need a verifiable evidence trail — without handing over your proprietary model to the validator.
Your paper's reviewer cannot reproduce your simulation results without your exact environment, data, and compute. Nature estimates 70%+ of computational results are never independently verified. MetaGenesis makes yours the exception.
Your ANSYS, FEniCS, or OpenFOAM simulation produces displacement results. Your engineering client needs machine-verifiable proof that the output matches physical reference data — not a PDF report, not a screenshot. A certificate any auditor can check offline.
They run mg.py verify --pack bundle.zip on their machine.

This is the exact logic the protocol runs. No backend. No network. The same verification that ships with the codebase.
Share a computational result — any domain. We build a verification bundle for it at no charge. You see PASS or FAIL before any commercial conversation. No strings attached. Response within 48 hours.
Any computational output — ML accuracy, calibration data, pipeline certificate, simulation output.
We implement the verification claim for your domain and generate a tamper-evident evidence bundle.
Run mg.py verify --pack bundle.zip on your machine. See PASS or FAIL. No trust required.
If it solves your problem, we discuss next steps. If it doesn't fit, we tell you honestly.
Clone. Run the demo. See PASS.
No account. No API key. No GPU. No network.