open source v-.-.-

humancov

One command. Full visibility.

terminal
$ humancov scan

AI-Provenance Scan
==================
Total files scanned:  42
AI-generated:         28
Human-written:        10
Mixed:                2
Unknown (no header):  2

Of AI files:
  Reviewed:  21 / 28  (75%)
  Tested:    8 / 28  (29%)
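
The report above is essentially a tally of header values. A minimal sketch of that logic, assuming the scanner only needs each file's leading text (the function names are illustrative, not humancov internals):

```python
import re

# Matches the Origin header anywhere in the text we feed it.
HEADER_RE = re.compile(r"AI-Provenance-Origin:\s*(ai|human|mixed)")

def classify(head: str) -> str:
    """Classify a file from its leading text: ai, human, mixed, or unknown."""
    m = HEADER_RE.search(head)
    return m.group(1) if m else "unknown"

def tally(files: dict) -> dict:
    """Count files per provenance bucket, as in the scan report."""
    counts = {"ai": 0, "human": 0, "mixed": 0, "unknown": 0}
    for head in files.values():
        counts[classify(head)] += 1
    return counts
```

A scanner would feed the first few lines of each file through `classify` and print the tallies.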

Three steps. No config.

1. Add provenance headers

Add headers to the top of your files. Works with 30+ languages automatically.

# AI-Provenance-Origin: ai
# AI-Provenance-Reviewed: false
2. Scan your repo

Run the command and get a full report. Binaries and ignored paths are skipped automatically.

$ humancov scan
3. Review and track

Review each AI file, then set AI-Provenance-Reviewed: true in its header. The report reflects the change on your next scan.
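
Setting Reviewed: true is a plain text edit. A hedged sketch of what the flip amounts to (humancov itself may rewrite headers differently):

```python
def mark_reviewed(source: str) -> str:
    """Flip the AI-Provenance-Reviewed header from false to true."""
    return source.replace(
        "AI-Provenance-Reviewed: false",
        "AI-Provenance-Reviewed: true",
        1,  # only the header occurrence
    )
```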

Teach your AI to tag its code.

Run humancov init to detect your AI tool config files and inject provenance instructions automatically.

CLAUDE.md (Claude Code)
.cursorrules (Cursor)
.windsurfrules (Windsurf)
copilot-instructions.md (GitHub Copilot)
terminal
$ humancov init

  done: CLAUDE.md (Claude Code instructions added)
  done: .cursorrules (Cursor instructions added)
  skip: .windsurfrules (already has AI-Provenance instructions)

2 file(s) updated.

Ready in seconds.

npm install -g humancov

Requires Node >= 18. Zero config. Single dependency.

Header keys.

Key                       Required  Values                  Description
AI-Provenance-Origin      yes       ai | human | mixed      Who wrote the file
AI-Provenance-Generator   no        free text               Tool used (claude-code, copilot...)
AI-Provenance-Reviewed    yes       true | false | partial  Human review status
AI-Provenance-Tested      no        true | false | partial  Human test status
AI-Provenance-Confidence  no        high | medium | low     Reviewer confidence
AI-Provenance-Notes       no        free text               Any context
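
A header using every key might look like this (all values illustrative):

```
# AI-Provenance-Origin: ai
# AI-Provenance-Generator: claude-code
# AI-Provenance-Reviewed: true
# AI-Provenance-Tested: partial
# AI-Provenance-Confidence: medium
# AI-Provenance-Notes: reviewed error handling only
```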

30+ file types supported.

//       JS, TS, JSX, TSX, Java, C, C++, C#, Go, Rust, Swift, Kotlin, Scala, Dart, PHP, Groovy
#        Python, Ruby, Shell, YAML, TOML, Perl, R, PowerShell, Dockerfile, Terraform, Elixir, Julia
<!--     HTML, XML, SVG, Vue, Svelte, Markdown
/* */    CSS, SCSS, Less
--       SQL, Lua, Haskell, Elm
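
The groups above can be read as an extension-to-comment-style lookup. An illustrative, abridged sketch; this table is not humancov's internal data:

```python
# Abridged map from file extension to comment prefix (sample entries only).
COMMENT_PREFIX = {
    ".js": "//", ".ts": "//", ".go": "//", ".rs": "//", ".java": "//",
    ".py": "#", ".rb": "#", ".sh": "#", ".yml": "#", ".toml": "#",
    ".html": "<!--", ".xml": "<!--", ".md": "<!--",
    ".css": "/*", ".scss": "/*",
    ".sql": "--", ".lua": "--", ".hs": "--",
}

def header_line(ext: str, key: str, value: str) -> str:
    """Render one provenance header line in the file's comment style."""
    prefix = COMMENT_PREFIX[ext]
    line = f"{prefix} AI-Provenance-{key}: {value}"
    # Block-comment styles need a closing delimiter on the same line.
    return line + " -->" if prefix == "<!--" else line + " */" if prefix == "/*" else line
```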

Show your review coverage.

< 25%    needs review
25-74%   in progress
75-99%   mostly reviewed
100%     fully reviewed
terminal
$ humancov scan --badge

https://img.shields.io/badge/human--reviewed-75%25%20of%20AI%20files-green

![Human Reviewed](https://img.shields.io/badge/human--reviewed-75%25...)
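
The URL follows shields.io's static-badge format. A sketch of building it from the scan counts; green at 75% matches the example output, but the red and yellow choices below that are assumptions:

```python
from urllib.parse import quote

def badge_url(reviewed: int, total: int) -> str:
    """Build a shields.io static badge URL for review coverage."""
    pct = round(100 * reviewed / total)
    # Color thresholds are assumed, except green for >= 75% (per the example).
    color = "red" if pct < 25 else "yellow" if pct < 75 else "green"
    # shields.io escaping: '%' becomes %25, spaces become %20,
    # and a literal '-' in the label is doubled ("human--reviewed").
    message = quote(f"{pct}% of AI files")
    return f"https://img.shields.io/badge/human--reviewed-{message}-{color}"
```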

Enforce in your pipeline.

# .github/workflows/provenance.yml
name: AI Review Check
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npx humancov scan --check 80

Fails the build when the share of human-reviewed AI files falls below the threshold (80% in this example).
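
The gate itself reduces to an exit code. A sketch with an illustrative function name, not humancov's actual implementation:

```python
def check_gate(reviewed: int, ai_total: int, threshold: float) -> int:
    """CI gate exit code: 0 when review coverage meets the threshold, 1 otherwise."""
    pct = 100 * reviewed / ai_total if ai_total else 100.0  # no AI files passes trivially
    return 0 if pct >= threshold else 1

# A CLI would end with: sys.exit(check_gate(reviewed, ai_total, threshold))
```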

All commands.

terminal
$ humancov scan            # scan repo, print report
$ humancov scan --json     # output as JSON
$ humancov scan --badge    # shields.io badge URL
$ humancov scan --check 80 # CI gate: fail if < 80%
$ humancov manifest        # generate .humancov file
$ humancov init            # add instructions to AI tool configs