Every ranking on Venture Harbour has been through the same process. This page is the manual we follow when we sit down to review a new category.
Would we use it in a portfolio company?
Venture Harbour is an automation venture studio. We operate our own portfolio of automation companies. Every tool we write about is tested against a single question:
Would we run this inside one of our own portfolio companies?
If the answer is yes, the tool is considered for the ranking. If it’s no, we say why.
The testing framework
For each tool we include in a ranked review, we work through the same steps:
- Real account. We create a real account and actually use the product.
- Real data. We import, or set the tool up with, the same kind of data a real customer would.
- End-to-end use. We use the tool to complete the core job the category exists to do — run an email campaign, close a deal, deploy a site, write some code — not just click through the settings.
- Compare side-by-side. Every scored tool in a category is tested against the same criteria on the same rubric, by the same reviewer.
The scoring rubric
Each tool gets a rating out of 5 across the dimensions that matter for that category. The weights differ — page builders care about rendering speed; CRMs care about reporting depth — but the consistent ones are:
- Core job performance. Does it do the main thing the category exists to do, well?
- Ease of use. How long before a competent operator is productive?
- Integration surface. Does it connect with the rest of a modern stack, or is it a silo?
- Pricing transparency. Are the numbers on the pricing page what you’ll actually pay, or are there meters hidden behind sales calls?
- Reliability. Outages, data loss, support response times — the things you only find out in week three.
- Support and docs. Do they answer, and is the documentation written for operators rather than marketing?
Scores are public. Rating breakdowns appear inside each review, so you can see exactly where a tool gained or lost points.
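To make the weighting concrete, here is a minimal sketch of how per-dimension ratings out of 5 could be combined into a weighted overall score. The dimension keys, weights, and ratings below are illustrative assumptions for a hypothetical CRM category, not the actual weights behind any published ranking.

```python
# Minimal sketch of a weighted rubric score. The dimensions, weights, and
# ratings are illustrative assumptions, not the weights behind any real review.

def overall_score(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension ratings (0-5) into a weighted overall score out of 5."""
    total_weight = sum(weights.values())
    weighted_sum = sum(ratings[dim] * w for dim, w in weights.items())
    return round(weighted_sum / total_weight, 1)

# Hypothetical weights for a CRM category, where reporting depth matters more.
crm_weights = {
    "core_job": 0.30,
    "ease_of_use": 0.15,
    "integrations": 0.15,
    "pricing_transparency": 0.10,
    "reliability": 0.15,
    "support_and_docs": 0.15,
}

# Hypothetical ratings for a single tool.
tool_ratings = {
    "core_job": 4.5,
    "ease_of_use": 4.0,
    "integrations": 3.5,
    "pricing_transparency": 3.0,
    "reliability": 4.0,
    "support_and_docs": 4.5,
}

print(overall_score(tool_ratings, crm_weights))  # 4.0 with these numbers
```

Changing the weights for a different category (say, rendering speed dominating for page builders) shifts the headline number without changing the per-dimension ratings, which is why the breakdowns published inside each review matter more than the single overall score.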
Who does the testing
Every review is researched and written by the author whose name is on the byline. Testing is done by that author, not by an external agency or freelancer, and usually happens inside a live Venture Harbour venture. Every piece is edited and fact-checked against the live product before it goes live, and every tool is scored against the rubric above.
When we revisit a ranking
Categories don’t stand still. We re-test a ranking when any of the following happen:
- A tool in the top three materially changes pricing, positioning, or feature set.
- A new entrant looks like it’s credibly in the top five.
- Our own day-to-day use of the category turns up something we missed.
- Reader feedback points at a factual error (keep them coming — see contact).
The “Last updated” date on each review reflects the last time we re-tested the category, not a cosmetic refresh.
Affiliate relationships
Some of the tools we rank have affiliate programmes and some don’t. The ranking is written before we know which is which. If the top-ranked tool in a category has no affiliate programme, it still ranks first. For the full policy on how affiliate relationships are disclosed and how they don’t influence ranking, see our editorial policy.
Push back
If you think we got a ranking wrong, or a tool is missing from a review that should be in it, let us know.