Published ML results are hard to verify without either trusting the reporter or rebuilding their environment from scratch. MetaGenesis Core, an MIT-licensed open-source protocol built by solo inventor Yehor Bazhynov, tries to close that gap. It packages any computational result — ML benchmarks, FEM simulation outputs, data pipeline certificates, calibration runs — into a self-contained, tamper-evident evidence bundle verifiable offline with one command: python scripts/mg.py verify --pack bundle.zip. No GPU, no original model access, no network required. The project currently has eight active claims across six domains and 107 passing tests. A USPTO provisional patent (#63/996,819) was filed March 5, 2026.
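The pack-then-verify workflow can be pictured with a minimal sketch in plain Python. The helper below is illustrative only: the manifest.json name and its flat filename-to-digest layout are assumptions, not MetaGenesis Core's actual bundle format.

```python
import hashlib
import json
import zipfile


def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()


def build_pack(result_files: dict[str, bytes], out_path: str) -> None:
    """Package result files plus a SHA-256 manifest into a zip bundle.

    The manifest.json name and its {filename: digest} layout are
    illustrative assumptions, not the protocol's real schema.
    """
    manifest = {name: sha256_of(data) for name, data in result_files.items()}
    with zipfile.ZipFile(out_path, "w") as zf:
        for name, data in result_files.items():
            zf.writestr(name, data)
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
```

The point of the bundle is that a verifier needs nothing but the zip itself: every byte required to recheck the claim travels with it.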
Verification runs on two layers. The first applies SHA-256 cryptographic hashing to catch tampering. The second, semantic layer is designed to catch attacks that survive hash recomputation — specifically, the scenario where a job snapshot is stripped from a run artifact and all hashes are recomputed to restore apparent integrity. MetaGenesis Core ships an adversarial test, test_cert02, that demonstrates this attack is caught. For physics and engineering work, a third dimension applies: verification chains are grounded in measured physical constants — aluminum's Young's Modulus at 70 GPa, for instance — so a PASS verdict asserts agreement with physical reality, not just internal consistency.
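To make the layering concrete, here is a hedged sketch of what such a verifier might look like. Every file and field name here ('manifest.json', 'run.json', 'job_snapshot', 'youngs_modulus_gpa') is an assumption for illustration, not the protocol's real schema, and the tolerance is arbitrary.

```python
import hashlib
import json
import zipfile

# Reference Young's Modulus for aluminum, per the article's example.
ALUMINUM_E_GPA = 70.0


def verify_pack(path: str, rel_tol: float = 0.05) -> bool:
    """Illustrative layered check: cryptographic, semantic, physical."""
    with zipfile.ZipFile(path) as zf:
        manifest = json.loads(zf.read("manifest.json"))
        # Layer 1 (cryptographic): recompute SHA-256 for every listed file.
        for name, expected in manifest.items():
            if hashlib.sha256(zf.read(name)).hexdigest() != expected:
                return False
        run = json.loads(zf.read("run.json"))
    # Layer 2 (semantic): a run whose job snapshot was stripped fails
    # even though every hash above was recomputed to look consistent.
    if not run.get("job_snapshot"):
        return False
    # Physical grounding: a measured constant must agree with reality
    # within a relative tolerance, not merely be internally consistent.
    measured = run.get("youngs_modulus_gpa")
    if measured is not None:
        if abs(measured - ALUMINUM_E_GPA) / ALUMINUM_E_GPA > rel_tol:
            return False
    return True
```

The key property the sketch shows: recomputing hashes after stripping the snapshot restores layer 1 but still fails layer 2, which is exactly the attack class test_cert02 exercises.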
The eight current claims cover materials testing (Young's Modulus, Thermal Conductivity, Multilayer Contact), system identification, data pipelines, drift monitoring, ML accuracy certification, and digital twin FEM verification. Two design choices distinguish the project from ad hoc audit approaches. Every registered claim must have a corresponding implementation, and that requirement is enforced by static analysis in CI, not by manual review. Known limitations are tracked openly in a known_faults.yaml file in the repository — a transparency mechanism that is unusual for a project at this stage and suggests the author has thought carefully about what the protocol cannot yet catch.
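The claims-must-have-implementations rule could be enforced in CI with a few lines of ast-based static analysis. The sketch below is a guess at how such a gate might look, under the assumption of a registry mapping claim IDs to expected function names; it is not the project's actual CI check.

```python
import ast


def implemented_functions(source: str) -> set[str]:
    """Names of all functions defined in a Python source string."""
    tree = ast.parse(source)
    return {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}


def missing_claims(claims: dict[str, str], source: str) -> list[str]:
    """Registered claims whose declared implementation function is absent.

    A CI job would fail the build when this list is non-empty. The
    claim-id -> function-name registry format is a hypothetical example.
    """
    defined = implemented_functions(source)
    return [cid for cid, fn in claims.items() if fn not in defined]
```

Because the check parses the code rather than running it, it costs almost nothing in CI and cannot be satisfied by documentation alone, which is the property the article highlights.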
Bazhynov has said he built the project after hours over roughly a year, motivated by the reproducibility crisis across ML research. The specific trigger he cites: a 2023 Science paper by Kapoor and Narayanan that found data leakage in 294 ML papers spanning 17 disciplines. The project targets regulatory frameworks where reproducibility failures carry real legal weight — EU AI Act Article 9, FDA 21 CFR Part 11 for pharmaceutical records, and Basel III/SR 11-7 for financial model validation. Pricing runs from a free pilot tier to a $299 bundle to enterprise licensing.
At eight claims and 107 tests, MetaGenesis Core is a proof of concept rather than an audited production tool. Whether it finds adoption among ML teams or compliance officers will depend on how quickly Bazhynov can expand domain coverage — and whether the regulatory bodies it targets will accept a solo-built, provisional-patent-stage protocol as evidence of compliance.