Comparison Guide

pgvectorBench vs MonPG HNSW Tuning Lab

pgvectorBench is a lightweight CLI that runs benchmarks locally against your own database. MonPG HNSW Tuning Lab is a hosted service with a web UI, grid search on isolated infrastructure, migration SQL generation, CI integration, and team sharing. The choice is the same one teams make between Matomo and Fathom, or between self-hosted Grafana and a managed observability platform: how much operational overhead you are willing to carry for the same numbers.

Strongest fit

  • You want recall + latency numbers without setting up a Go toolchain, a pgvector sandbox, and a sample-ingestion script.
  • You want the benchmark to run somewhere other than your production database.
  • You need copy-ready migration SQL plus a rollback plan for the winning configuration.
  • You want the benchmark to rerun on a schedule and alert on recall drops.
  • You want team members to see each other's benchmark history without shared machines.

Not ideal when

  • You already have a pgvector-enabled local Postgres, the time to write a grid loop, and no need for shared history.
  • Your compliance posture forbids sending samples to any hosted service (our self-hosted runner lands in Wave 3; pgvectorBench is the right choice today).

Decision signals to evaluate

Setup time: pgvectorBench takes 20-45 minutes of Go + Postgres setup; MonPG Tuning Lab is a DB connect + three form fields.
Benchmark compute: pgvectorBench runs on your machine or customer DB; MonPG isolates the load on a worker-owned pgvector cluster.
Migration SQL: pgvectorBench outputs numbers only; MonPG emits CREATE INDEX CONCURRENTLY + rollback + app-role ALTER.
CI integration: not built-in with pgvectorBench; MonPG Team tier exposes the benchmark API with an API key for GitHub Actions / GitLab CI.
Recall drift monitoring: pgvectorBench is one-shot; MonPG Scale tier schedules re-benchmarks and alerts on recall drop > threshold.
Price: pgvectorBench is free; the MonPG add-on is $29/month per org, with one benchmark free each month.
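
To make the "Migration SQL" signal concrete, here is a sketch of what a winning-configuration migration could look like. The table, column, index, role names, and parameter values below are illustrative assumptions, not actual MonPG output:

```sql
-- Winning configuration (all names and values illustrative)
CREATE INDEX CONCURRENTLY items_embedding_hnsw_idx
    ON items USING hnsw (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 200);

-- Pin the query-time search width for the application role
ALTER ROLE app_rw SET hnsw.ef_search = 100;

-- Rollback
DROP INDEX CONCURRENTLY IF EXISTS items_embedding_hnsw_idx;
ALTER ROLE app_rw RESET hnsw.ef_search;
```

CREATE INDEX CONCURRENTLY avoids blocking writes during the build, which is why it suits a production deployment window.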

FAQ

Can I use both?

Yes. Some teams keep pgvectorBench around for local experimentation during development and use MonPG for scheduled re-benchmarks in production. The methodology is the same; only the execution surface differs.

Does MonPG write to my database?

No. The benchmark worker runs a single SELECT ... LIMIT N against your database to build the sample, then runs every CREATE INDEX / SELECT on a MonPG-owned, worker-local pgvector Postgres. The only artifact that comes back to your database is the migration SQL, which you apply yourself in your own deployment window.
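
A read of that shape, with hypothetical table and column names, an assumed random ordering, and an arbitrary N, could look like:

```sql
-- Illustrative sample read (table, columns, ordering, and N are assumptions)
SELECT id, embedding
FROM items
ORDER BY random()
LIMIT 10000;
```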

Will pgvectorBench's results match MonPG's?

For the same sample, query set, and configuration, yes: the math is identical. Differences come from sample size, query selection, and whether the index build runs on a busy shared host or a dedicated worker. MonPG removes the ambient-load variable, so results are comparable across runs.
