The ranking concept behind every Vendar listing
This page defines the target scoring model for Vendar. It is not just a description of profile fields. It is the algorithm concept that should later drive public rankings, scenario rankings, recommendations, and AI-readable decision output.
Not popularity, not review volume, not brand fame. The target is probability of solving a specific request well.
Fit gets a vendor into the conversation. Evidence pushes it upward. Both are stronger than generic reputation.
Weak, stale, thin, or contradictory profiles should not float upward on polished copy alone.
Service, geo, industry, budget, and business size define the ranking scenario.
Only vendors that genuinely belong in this set should enter ranking at all.
Service match, geo match, industry, size, and engagement fit decide relevance.
Cases, metrics, scope, repeatability, and verifiability decide ordering.
Confidence raises trust. Penalties stop weak profiles from floating upward.
Penalties are not another bar. They sit outside the positive stack and pull weak, stale, or contradictory profiles back down.
Vendar should rank decision readiness, not popularity
Review-heavy directories mostly answer “who gets praised most often?”. Vendar should answer “who is most likely to solve this exact task, with enough proof, enough transparency, and enough confidence to trust the conclusion?”.
- A scoring concept for future ranking logic
- A product spec for listings, compare, and recommendation
- A readable explanation for humans and AI systems
- Not just a profile-fill checklist
- Not a review-driven directory formula
- Not a promise that popularity alone can rank a vendor highly
1. Separate reputation from decision score
Reputation Score
What the market says about the company. Reviews, sentiment, external references, and broad market signals belong here.
Decision Fit Score
How suitable the company is for a specific request. This is the score that should control public listings and recommendation output.
2. Ranking is scenario-based, not global
Vendar should not maintain one universal “top companies” list. It should rank vendors inside a specific context: service, subservice, geo, industry, business size, budget, and request type. A company can rank high for enterprise technical SEO and rank much lower for local SMB SEO.
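The scenario described above can be sketched as a small value object. This is a hypothetical shape for illustration, not a confirmed Vendar schema; the field names are assumptions drawn from the dimensions this section lists.

```python
from dataclasses import dataclass

# Hypothetical scenario key: the context a ranking is computed inside.
# Field names are illustrative, taken from this section's list of
# dimensions (service, subservice, geo, industry, size, budget, request).
@dataclass(frozen=True)
class Scenario:
    service: str
    subservice: str
    geo: str
    industry: str
    business_size: str
    budget_band: str
    request_type: str

# The same vendor can be scored independently per scenario:
enterprise = Scenario("seo", "technical-seo", "us", "saas",
                      "enterprise", "high", "retainer")
smb = Scenario("seo", "local-seo", "us-la", "retail",
               "smb", "low", "project")
assert enterprise != smb  # distinct contexts, distinct rankings
```

Because the scenario is a frozen, hashable object, it can serve as a cache or index key: one score table per scenario rather than one global list.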
3. Target weighted formula
Evidence / Proof: 30%
The strongest factor. The score should look at case studies, quantified outcomes, scope clarity, repeatability, and whether the evidence can be checked publicly.
Fit: 25%
How closely the vendor matches the request. Service match, subservice match, geo match, industry match, client-size match, and engagement fit belong here.
Capability: 15%
What the company can actually do. Technical SEO, content SEO, local SEO, link building, vertical expertise, and delivery depth all matter here.
Reliability: 10%
How safe the company looks operationally. Retention, seniority, process quality, communication maturity, and reporting discipline belong in this layer.
Transparency: 10%
How much useful decision information is public. Pricing guidance, minimums, team size, clear services, exclusions, and methodology should all raise transparency.
Reputation + Market Credibility: 10%
Reviews, public reputation, recognized clients, awards, media mentions, and ecosystem presence matter, but they should stay below proof and fit.
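The weighted formula above can be sketched as a simple weighted sum. The weights come from this section; the factor scores and the aggregation itself are illustrative, not final ranking logic.

```python
# Target weights from this section. Factor scores are assumed to be
# normalized to 0..1 before blending; that normalization is not
# specified here and is an assumption of this sketch.
WEIGHTS = {
    "evidence": 0.30,
    "fit": 0.25,
    "capability": 0.15,
    "reliability": 0.10,
    "transparency": 0.10,
    "reputation": 0.10,
}

def decision_fit_score(factors: dict[str, float]) -> float:
    """Weighted sum of normalized factor scores; missing factors score 0."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

vendor = {"evidence": 0.8, "fit": 0.8, "capability": 0.8,
          "reliability": 0.7, "transparency": 0.7, "reputation": 0.9}
print(round(decision_fit_score(vendor), 3))
```

Note how the structure enforces the stated priority: even a perfect reputation score can move the total by at most 10 points out of 100.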
4. Evidence is the main driver, but only with fit
What strong evidence means
Not just a testimonial. Strong evidence means relevant case studies, before/after metrics, explicit project scope, timeframe, industry context, and repeated results across multiple examples.
The key relationship
The practical rule is simple: Fit decides who belongs in the set, and evidence decides who rises to the top. A perfect fit with no proof should not dominate. Strong proof with poor fit should also not dominate.
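The gate-then-order relationship can be sketched in a few lines. The fit threshold and field names here are assumptions for illustration; the point is the structure: fit filters the set, evidence sorts it.

```python
# Sketch of "fit decides who belongs, evidence decides who rises".
# The threshold value is an illustrative assumption.
FIT_THRESHOLD = 0.6  # below this, the vendor never enters the set

def rank(vendors: list[dict]) -> list[dict]:
    eligible = [v for v in vendors if v["fit"] >= FIT_THRESHOLD]
    # Within the eligible set, evidence is the primary sort key,
    # with fit as the tiebreaker.
    return sorted(eligible, key=lambda v: (v["evidence"], v["fit"]),
                  reverse=True)

vendors = [
    {"name": "A", "fit": 0.9, "evidence": 0.3},   # great fit, thin proof
    {"name": "B", "fit": 0.7, "evidence": 0.9},   # good fit, strong proof
    {"name": "C", "fit": 0.4, "evidence": 0.95},  # strong proof, wrong fit
]
print([v["name"] for v in rank(vendors)])  # → ['B', 'A']
```

Vendor C never appears in the output despite having the best evidence: strong proof with poor fit does not dominate, exactly as the rule states.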
5. Penalties are mandatory
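As this page describes, penalties sit outside the positive stack and pull weak, stale, or contradictory profiles back down. A minimal sketch of that separation, with illustrative penalty values:

```python
# Sketch: penalties are applied after the positive factors are summed,
# not blended into them, so a weak profile cannot earn them back
# through polished copy alone. Values are illustrative assumptions.
def apply_penalties(positive_score: float, penalties: list[float]) -> float:
    """Subtract penalty points from the positive stack; floor at zero."""
    return max(0.0, positive_score - sum(penalties))

# e.g. a stale-content penalty plus a contradiction penalty:
print(apply_penalties(72.0, [4.0, 2.5]))  # 65.5
```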
6. Confidence must be separate
Why confidence matters
A high score built on weak data should not look as trustworthy as a slightly lower score built on strong evidence. Confidence should tell both humans and AI how much to trust the conclusion.
What should feed confidence
Profile completeness, number of proof objects, number of independent sources, freshness, and cross-source consistency should all feed a separate confidence layer.
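The five confidence inputs above can be sketched as a separate layer. The normalization caps, weights, and band thresholds here are all assumptions; only the list of inputs comes from this section.

```python
# Illustrative confidence layer fed by the five signals listed above.
# Caps (10 proof objects, 5 sources) and band cutoffs are assumptions.
def confidence(completeness: float, proof_objects: int,
               sources: int, freshness: float, consistency: float) -> str:
    # Normalize the two counts with simple caps so every input is 0..1.
    score = (completeness
             + min(proof_objects, 10) / 10
             + min(sources, 5) / 5
             + freshness
             + consistency) / 5
    return "High" if score >= 0.75 else "Medium" if score >= 0.5 else "Low"

print(confidence(0.9, 12, 4, 0.8, 0.85))   # rich, fresh, consistent profile
print(confidence(0.2, 1, 1, 0.1, 0.2))     # thin, stale profile
```

Keeping this as its own output, rather than folding it into the score, lets two vendors with similar scores display very different trust levels.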
7. Evidence objects need types
Every proof object should carry its own provenance and strength. Vendar should distinguish self-reported claims, externally validated references, and system-derived observations. They should not have equal weight.
Self-reported: the company says it does something.
Externally validated: supported by external sources, reviews, or recognitions.
System-derived: inferred from normalized data, repeated patterns, and structured evidence.
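A hypothetical evidence-object shape that carries its own provenance. The three provenance types come from this section; the weight values are illustrative assumptions showing that the types should not count equally.

```python
from dataclasses import dataclass

# Illustrative provenance weights; the ordering (external > system >
# self-reported) reflects this section, the exact values do not.
PROVENANCE_WEIGHT = {
    "self_reported": 0.4,         # company says it does something
    "externally_validated": 1.0,  # backed by external sources or reviews
    "system_derived": 0.8,        # inferred from normalized, repeated data
}

@dataclass
class EvidenceObject:
    claim: str
    provenance: str   # one of the PROVENANCE_WEIGHT keys
    strength: float   # raw strength in 0..1

    def weighted(self) -> float:
        """Strength discounted by how the claim was sourced."""
        return self.strength * PROVENANCE_WEIGHT[self.provenance]

e = EvidenceObject("Grew organic traffic for a SaaS client",
                   "externally_validated", 0.9)
print(e.weighted())
```

An identical claim at identical raw strength scores lower when it is only self-reported, which is exactly the asymmetry this section calls for.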
8. Decision output should be explainable
The end result should not be just one number. Every vendor result should expose: overall score, confidence, factor breakdown, strengths, weaknesses, missing data, ideal use case, and not-ideal use case.
Confidence: High
Evidence: 24/30
Fit: 20/25
Capability: 12/15
Transparency: 7/10
Penalties: -4
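The explainable result above can be sketched as a structured payload. The field names are illustrative, not a fixed Vendar API; the factor numbers mirror the example breakdown in this section.

```python
import json

# Hypothetical explainable decision payload. Keys are assumptions;
# the factor values echo the example breakdown in this section
# (reliability and reputation omitted, as in that example).
result = {
    "vendor": "Example Agency",
    "overall_score": 59,  # 24 + 20 + 12 + 7, minus 4 penalty points
    "confidence": "High",
    "factors": {"evidence": [24, 30], "fit": [20, 25],
                "capability": [12, 15], "transparency": [7, 10]},
    "penalties": -4,
    "strengths": ["quantified case studies"],
    "weaknesses": ["no public pricing guidance"],
    "missing_data": ["team seniority"],
    "ideal_for": "enterprise technical SEO",
    "not_ideal_for": "local SMB SEO",
}
print(json.dumps(result, indent=2))
```

Serializing the full breakdown rather than one number is what makes the result auditable by humans and consumable by AI systems alike.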
9. AI-readable decision quality is its own layer
What AI needs
Normalized services, industries, geo, explicit claims, explicit evidence, timestamps, provenance, contradiction flags, and confidence fields.
Why this matters
If profiles are only long-form copy, AI systems will guess. If profiles are structured and decision-ready, Vendar can become a trusted upstream source for AI recommendations.
10. Example scenario: SEO for SaaS in Los Angeles
In this scenario, the set should include agencies that really do SEO, show SaaS evidence, and have strong Los Angeles or at least US market relevance. From there, the order should be driven by evidence quality first, then fit quality, then capability, then transparency and reliability.
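An end-to-end sketch of this scenario: filter on genuine set membership first, then order by evidence, fit, capability, and transparency. All vendor data here is invented for illustration.

```python
# Hedged sketch of the "SEO for SaaS in Los Angeles" scenario.
# Membership criteria and sort order follow this section; the
# agency records and geo codes are illustrative assumptions.
agencies = [
    {"name": "A", "does_seo": True, "saas_proof": True, "geo": "us-la",
     "evidence": 0.9, "fit": 0.8, "capability": 0.7, "transparency": 0.6},
    {"name": "B", "does_seo": True, "saas_proof": False, "geo": "us-la",
     "evidence": 0.95, "fit": 0.9, "capability": 0.9, "transparency": 0.9},
    {"name": "C", "does_seo": True, "saas_proof": True, "geo": "us",
     "evidence": 0.7, "fit": 0.7, "capability": 0.8, "transparency": 0.8},
]

# Step 1: the set — really does SEO, shows SaaS evidence, US-market relevant.
eligible = [a for a in agencies
            if a["does_seo"] and a["saas_proof"] and a["geo"].startswith("us")]

# Step 2: the order — evidence first, then fit, capability, transparency.
ranked = sorted(eligible,
                key=lambda a: (a["evidence"], a["fit"],
                               a["capability"], a["transparency"]),
                reverse=True)
print([a["name"] for a in ranked])  # → ['A', 'C']
```

Agency B, despite the strongest raw numbers, never enters the ranking because it lacks SaaS evidence: membership is binary, ordering is not.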
In plain English
Vendar should not rank agencies by who looks biggest or has the most praise. It should rank who is most likely to solve a specific task in a specific context with a defensible level of proof and transparency.
In practice that means: fit gets a vendor into the conversation, evidence pushes it upward, confidence tells you how much to trust the result, and penalties stop weak profiles from floating to the top on hype alone.
See the live example
Open a real listing and compare its top vendors against the scoring concept defined here.