
AWS Marketplace Agent Mode Ranking Audit

Date: March 16, 2026
Last Updated: March 16, 2026
Context: AWS disclosed how their Marketplace Agent Mode ranks partner listings when buyers ask comparison/recommendation questions. This audit maps Vellocity's existing capabilities against those ranking criteria and identifies gaps, with implementation recommendations.
Status: Audit complete — implementation plan pending


1. What AWS Disclosed (Verbatim Intel)

AWS shared the following about how their Marketplace agents index and rank partner listings for buyer-initiated questions:

Priority Order for Seller Context

  1. Catalog metadata shared with AWS Marketplace (highest priority)
  2. Third-party (3P) vendor content AWS has licensed (G2, Drata, etc.)
  3. Public information on the partner's website

Buyer Query Patterns

  • 70%+ of customer queries are medium-to-long length
  • Queries express business needs or requirements (not simple keyword searches)
  • AWS recommends including feature specs and use cases in listings and public documentation

Known Ranking Factors

  1. Product title and description accuracy — the #1 cause of poor relevancy is when product descriptions/titles make incorrect claims (e.g., claiming "free trial" when the ISV doesn't offer one via AWS MP)
  2. Public documentation must be up-to-date, and sites must maintain a robots.txt that allows agents to crawl

Upcoming Changes

  • AWS is working with the AI Listing team to let sellers provide specific evals to influence Agent Mode presence — targeting Q4 2026 release

2. Vellocity Capability Map vs. AWS Criteria

Scoring Legend

  • Strong — Existing capability directly addresses the criterion
  • Partial — Building blocks exist but aren't connected to address this criterion
  • Gap — No current capability; needs to be built
| AWS Criterion | Vellocity Status | Where It Lives | Notes |
|---|---|---|---|
| Catalog metadata quality | Strong | ListingQualityAnalyzer, FieldSeoScoringService, Content Optimizer | All 8 required fields scored, field-level AI Assist, version history |
| Feature specs in listings | Strong | FieldSeoScoringService (Highlights), ListingQualityAnalyzer (use_cases) | Highlights scored for count/quality; use cases weighted at 15 pts |
| Use cases in listings | Strong | ListingQualityAnalyzer (use_cases field, 15 pts), differentiation scoring | 3+ use cases = 15 pt bonus |
| AI discoverability | Strong | BedrockListingAnalyzer (AI Visibility proxy), DataForSEO LLM Mentions | MSS weights AI Visibility at 35% |
| Title/description accuracy | Gap | — | No cross-validation between listing claims and website/reality |
| Website ↔ listing coherence | Gap | LinkCrawler, ListingContentFetcher | Building blocks exist but aren't connected |
| Public documentation quality | Gap | ContentReadinessAnalyzer | Checks internal KBs only, not public docs |
| robots.txt for agent crawling | Gap | SitemapController | Generates Vellocity's own robots.txt; doesn't validate partner sites |
| 3P review presence (G2, Drata) | Partial | BrandMonitoringCapability | Monitors G2/Capterra/Trustpilot mentions but doesn't surface them in the listing scoring workflow |
| Long-tail query matching | Partial | BedrockListingAnalyzer ("Query Match Potential" dimension) | Scores exist but use case specificity isn't evaluated for query depth |

3. Detailed Gap Analysis

GAP 1: Website-to-Listing Coherence Score (CRITICAL)

The Problem: AWS agents cross-reference catalog metadata against public website content. When a listing claims "free trial" but the website doesn't offer one, the listing is penalized. Vellocity has all the building blocks — LinkCrawler crawls websites, ListingContentFetcher extracts listing content, BedrockListingAnalyzer can compare content — but nothing connects them to detect coherence issues.

What AWS Said:

"The cases we saw poor relevancy are when product description/title makes incorrect claims (such as with free trial when the ISV does not offer free trial via AWS MP)"

Impact: This is the #1 cited cause of poor ranking. Fixing this directly addresses AWS's top concern.

Current State:

  • LinkCrawler (app/Services/Chatbot/LinkCrawler.php) — crawls up to 30 pages per domain
  • ListingContentFetcher (app/Services/DataForSEO/ListingContentFetcher.php) — 3-tier extraction from AWS MP pages
  • BedrockListingAnalyzer (app/Services/DataForSEO/BedrockListingAnalyzer.php) — Claude-powered content analysis
  • ListingQualityAnalyzer (app/Services/DataForSEO/ListingQualityAnalyzer.php) — 4-subscore quality analysis
  • No service connects website content to listing content for comparison

Where to Build:

| Location | What to Add |
|---|---|
| New: WebsiteCoherenceService | Orchestrates crawl → compare → score pipeline |
| ListingQualityAnalyzer | New 5th subscore: website_coherence_score |
| MarketplaceSeoScoringService | New MSS component (15% weight, reallocated from LQ/BA) |
| LaunchReadinessScore (Livewire) | New "Website Alignment" check category |
| Content Optimizer UI | Side-by-side "Listing says X / Website says Y" discrepancy alerts |
| ComplianceValidatorService | Accuracy rules (warnings, not blockers) |

Proposed Scoring:

Website Coherence Score (0-100):
  claim_alignment    × 0.40  — Do listing claims match website content?
  feature_coverage   × 0.25  — Are listed features substantiated on website?
  pricing_consistency × 0.20  — Does pricing information align?
  cta_validity       × 0.15  — Are CTAs (free trial, demo) actually available?

High-Value Detections:

  • "Free trial" in listing but no trial signup on website
  • Pricing tier in listing doesn't match website pricing page
  • Integration claims (e.g., "integrates with Salesforce") with no evidence on website
  • Superlative claims ("#1", "industry-leading") without substantiation
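The proposed weighting and the top detection can be sketched as follows. This is a minimal illustration assuming subscores arrive as 0-100 values; the keyword heuristic is deliberately naive and is not existing Vellocity code (the real service would live in PHP alongside the analyzers above).

```python
# Illustrative sketch of the proposed Website Coherence Score.
# Subscore names and weights come from this audit; the detection
# logic is a hypothetical placeholder, not production logic.

COHERENCE_WEIGHTS = {
    "claim_alignment":     0.40,
    "feature_coverage":    0.25,
    "pricing_consistency": 0.20,
    "cta_validity":        0.15,
}

def coherence_score(subscores: dict) -> float:
    """Combine 0-100 subscores into the overall 0-100 coherence score."""
    return round(sum(subscores[k] * w for k, w in COHERENCE_WEIGHTS.items()), 1)

def detect_free_trial_mismatch(listing_text: str, website_text: str) -> bool:
    """Flag the #1 AWS-cited failure: a 'free trial' claim the website
    does not substantiate (naive keyword check for illustration)."""
    claims_trial = "free trial" in listing_text.lower()
    site_offers_trial = any(
        kw in website_text.lower()
        for kw in ("free trial", "start trial", "try for free")
    )
    return claims_trial and not site_offers_trial
```

In practice the subscores themselves would come from a Claude comparison pass (BedrockListingAnalyzer) rather than keyword matching; the sketch only shows how they roll up.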


GAP 2: Public Documentation Quality Assessment (HIGH)

The Problem: AWS agents use public documentation as Priority #3 context. Partners with thin or missing public docs are invisible to agent queries. Vellocity's ContentReadinessAnalyzer evaluates internal KBs but doesn't assess the partner's public-facing documentation.

What AWS Said:

"Suggest include feature specs and use cases in listings and their own public documentation"

Impact: 70%+ of queries are medium-to-long business need descriptions. Rich public docs that answer these queries improve agent ranking.

Current State:

  • ContentReadinessAnalyzerCapability — scores ICP Completeness (40%), Product Clarity (35%), and Market Definition (25%), but only against internal knowledge bases
  • LinkCrawler — can crawl partner websites but doesn't evaluate documentation quality
  • No assessment of whether public docs cover the features/use cases claimed in listings

Where to Build:

| Location | What to Add |
|---|---|
| New: PublicDocumentationAnalyzer | Crawl docs site → assess coverage vs. listing claims |
| BedrockListingAnalyzer | New analyzePublicDocumentation() method |
| ContentReadinessAnalyzerCapability | Extend Product Clarity to include public doc assessment |
| LaunchReadinessScore | New check: "Public documentation covers listed features" |
| MarketplaceSeoScoringService | Factor into AI Visibility subscore (docs improve agent discoverability) |

Proposed Assessment Dimensions:

Public Documentation Score (0-100):
  existence           × 0.20  — Does a docs site exist and is it reachable?
  feature_coverage    × 0.30  — Do docs cover features claimed in the listing?
  use_case_depth      × 0.25  — Do docs describe use cases with business context?
  structured_content  × 0.15  — Are there integration guides, API refs, tutorials?
  freshness           × 0.10  — Is content recently updated? (check dates, version refs)
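The feature_coverage dimension above is the most mechanical of the five and can be sketched directly. A hypothetical illustration, assuming the crawled docs arrive as plain-text pages; a real implementation would use semantic matching rather than substring checks:

```python
# Illustrative feature_coverage check: what fraction of the features
# claimed in the listing are mentioned anywhere in the crawled public
# docs? Function shape is an assumption, not existing Vellocity code.

def feature_coverage(listing_features: list, docs_pages: list) -> float:
    """Return 0-100: share of listed features mentioned in public docs."""
    if not listing_features:
        return 0.0
    corpus = " ".join(docs_pages).lower()
    covered = sum(1 for f in listing_features if f.lower() in corpus)
    return round(100 * covered / len(listing_features), 1)
```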


GAP 3: robots.txt Validation for Partner Sites (HIGH)

The Problem: AWS explicitly stated partners need "up-to-date robots.txt documentation for agents to collect information from your sites." If a partner's website blocks GPTBot, ClaudeBot, or Amazonbot, their public content is invisible to agent mode — no matter how good it is.

What AWS Said:

"Make sure public documentation are up-to-date and maintains up-to-date robots.txt documentations for agents to collect information from your sites"

Impact: Binary — either agents can crawl or they can't. Blocking agent crawlers = complete invisibility in agent mode. This is the highest ROI fix (low effort, high impact).

Current State:

  • SitemapController (app/Http/Controllers/Common/SitemapController.php) — generates robots.txt for Vellocity's own site
  • ListingContentFetcher — fetches partner URLs but doesn't check robots.txt
  • No validation of whether partner sites allow AI agent crawlers

Where to Build:

| Location | What to Add |
|---|---|
| ListingContentFetcher | New checkRobotsTxt() method |
| New: RobotsTxtValidator (lightweight service) | Fetch & parse robots.txt, check key user-agents |
| LaunchReadinessScore | Blocking issue: "Your website blocks AI agents" |
| Marketplace SEO Score UI | Red-flag warning banner |
| ComplianceValidatorService | Warning rule for blocked agents |

User-Agents to Check:

Amazonbot        — AWS's crawler (highest priority)
GPTBot           — OpenAI/ChatGPT
ClaudeBot        — Anthropic/Claude
Google-Extended  — Google AI (Gemini)
PerplexityBot    — Perplexity AI
CCBot            — Common Crawl (training data source)

Output:

{
  "overall_status": "warning",
  "agents_allowed": ["Googlebot", "Bingbot"],
  "agents_blocked": ["GPTBot", "ClaudeBot"],
  "agents_unknown": ["Amazonbot"],
  "recommendation": "Your robots.txt blocks AI agents. AWS Marketplace Agent Mode cannot access your public documentation. Add explicit Allow rules for Amazonbot, GPTBot, and ClaudeBot.",
  "blocking_for_launch": true
}
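The core check is small enough to sketch with a standard robots.txt parser (here Python's stdlib; the eventual service would be PHP). Names are illustrative, and this omits the agents_unknown distinction from the payload above:

```python
# Sketch of the proposed RobotsTxtValidator core logic using the
# stdlib robots.txt parser. Agent list mirrors the table above.

from urllib.robotparser import RobotFileParser

AI_AGENTS = ["Amazonbot", "GPTBot", "ClaudeBot",
             "Google-Extended", "PerplexityBot", "CCBot"]

def validate_robots_txt(robots_txt: str, url: str = "https://example.com/") -> dict:
    """Parse a robots.txt body and report which AI agents it blocks."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    blocked = [a for a in AI_AGENTS if not rp.can_fetch(a, url)]
    return {
        "overall_status": "warning" if blocked else "ok",
        "agents_allowed": [a for a in AI_AGENTS if a not in blocked],
        "agents_blocked": blocked,
        # Amazonbot is AWS's own crawler, so blocking it is launch-blocking.
        "blocking_for_launch": "Amazonbot" in blocked,
    }
```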


GAP 4: Listing Claim Accuracy Verification (MEDIUM-HIGH)

The Problem: Compliance validation checks AWS structural requirements (field lengths, character limits) but doesn't verify whether claims in the listing are factually accurate against the partner's own website.

What AWS Said:

"Ensure product title and descriptions are accurate"

Impact: Inaccurate claims are the single scenario AWS called out as causing poor relevancy.

Current State:

  • ComplianceValidatorService — validates structural compliance (max lengths, required fields)
  • ListingQualityAnalyzer — checks copy quality but not factual accuracy
  • ConversionHygieneScore — detects off-platform CTA spam, but not claim validity

Where to Build: This is largely solved by GAP 1 (Website Coherence). The accuracy verification is a specific check within the coherence score. Additional checks:

| Location | What to Add |
|---|---|
| ListingQualityAnalyzer | accuracy_flags array in analysis output |
| ComplianceValidatorService | New accuracy warning rules (not blockers) |
| Content Optimizer UI | Inline accuracy warnings on specific fields |

Specific Accuracy Checks:

  • "Free trial" / "Free tier" claims → verify a trial exists on the website and AWS MP
  • Pricing claims → cross-reference with the website pricing page
  • Integration claims → check for integration documentation
  • Certification claims (SOC 2, ISO, HIPAA) → check for a compliance page
  • Customer count / metrics claims → flag unverifiable superlatives


GAP 5: 3P Review Signal Integration (MEDIUM — Not a Deal-Breaker)

The Problem: AWS uses G2, Drata, and similar 3P vendor content as Priority #2 context. Vellocity doesn't have API access to G2 or Drata.

Why It's NOT a Deal-Breaker:

  • AWS licenses G2/Drata data themselves — they enrich agent context with it directly
  • Partners can't control what G2 says, but they CAN control their listing and website
  • Vellocity's job is to ensure partners have 3P profiles and that their content is consistent

Workarounds Available Now:

| Approach | How |
|---|---|
| DataForSEO Content Analysis | Search for "product name" site:g2.com to detect G2 presence |
| BrandMonitoringCapability | Already references G2, Capterra, Trustpilot, Yelp |
| Recommendation engine | If no G2 presence detected → "AWS agents reference G2 data. Ensure your product has an up-to-date G2 profile." |
| KB content | Add knowledge base entries advising on G2/Drata profile optimization |

Where to Surface:

| Location | What to Add |
|---|---|
| MarketplaceSeoScoringService | New third_party_presence signal in AI Visibility |
| Marketplace SEO Score UI | "3P Review Visibility" section with detected profiles |
| LaunchReadinessScore | Non-blocking recommendation: "No G2 profile detected" |
| Content Optimizer recommendations | "AWS agents use G2 data — claim your profile" |

GAP 6: Use Case & Feature Spec Depth for Long-Tail Queries (MEDIUM)

The Problem: 70%+ of buyer queries are medium-to-long with business needs. Vellocity scores use case presence and count, but doesn't evaluate whether use cases are specific enough to match real buyer queries.

What AWS Said:

"70+% customer queries are medium to long length, with business needs or requirements"

Current State:

  • ListingQualityAnalyzer — differentiation score gives 15 pts for 3+ use cases, but only checks count
  • FieldSeoScoringService — no specificity scoring for use cases
  • BedrockListingAnalyzer — "Query Match Potential" dimension exists but doesn't test against buyer query patterns
  • Content Optimizer AI Assist — generates use cases, but prompts don't emphasize query-matchability

Where to Build:

| Location | What to Add |
|---|---|
| FieldSeoScoringService | Use case specificity scoring (word count, problem/solution framing) |
| ListingQualityAnalyzer | Upgrade differentiation scoring to evaluate query-matchability |
| BedrockListingAnalyzer | Enhance AI Visibility prompt to generate + rate example buyer queries |
| Listing Generator prompts | Instruct: "Write use cases as buyer-query-answerable statements" |
| Content Optimizer UI | Show "Example buyer queries this use case would match" |

Specificity Scoring Criteria:

Use Case Specificity (per use case):
  length             — Is it >50 chars? (not just "Compliance")
  problem_statement  — Does it describe a problem? ("Reduce...", "Automate...", "Eliminate...")
  audience_signal    — Does it name a role or industry? ("for DevOps teams", "in healthcare")
  outcome_signal     — Does it promise a measurable outcome? ("reduce by 40%", "in minutes")
  query_matchable    — Would a buyer's natural language question match this?
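The first four criteria are cheap lexical heuristics and can be sketched directly; query_matchable would need an LLM pass (BedrockListingAnalyzer). The patterns and verb list below are illustrative assumptions, not tuned values:

```python
# Heuristic sketch of the use case specificity criteria above.
# Regex patterns are deliberately small examples, not a tuned rule set.

import re

def specificity_signals(use_case: str) -> dict:
    """Return pass/fail for the four lexically checkable criteria."""
    text = use_case.lower()
    return {
        # Longer than a bare category word like "Compliance"
        "length": len(use_case) > 50,
        # Leads with a problem/action verb
        "problem_statement": bool(re.match(
            r"\s*(reduce|automate|eliminate|streamline|prevent)\b", text)),
        # Names a role or industry
        "audience_signal": bool(re.search(
            r"\bfor [a-z]+ teams\b|\bin (healthcare|finance|retail)\b", text)),
        # Promises a measurable outcome
        "outcome_signal": bool(re.search(r"\d+%|in minutes|in hours", text)),
    }
```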


4. Revised MSS Formula (Proposed)

Current Formula

MSS = (LQ × 0.40) + (BA × 0.25) + (AIV × 0.35)

Proposed Formula (Post-Implementation)

MSS = (LQ × 0.30) + (WC × 0.15) + (BA × 0.20) + (AIV × 0.35)

Where:
  LQ  = Listing Quality (reduced from 0.40 → 0.30)
  WC  = Website Coherence (NEW — 0.15)
  BA  = Backlink Authority (reduced from 0.25 → 0.20)
  AIV = AI Visibility (unchanged — 0.35)

Why This Rebalance

  • Website Coherence is the #1 factor AWS cited for poor rankings — it deserves its own weight
  • AI Visibility stays highest because that's the entire point of Agent Mode ranking
  • Backlink Authority is less directly impactful on agent rankings (agents don't follow backlinks)
  • Listing Quality remains important but some of its weight shifts to the more specific WC score

DynamicWeightCalculator Impact

When website data is unavailable (partner hasn't provided URL), the DynamicWeightCalculator already handles this — WC gets unavailable status and its weight redistributes to other components.
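The rebalanced formula and that redistribution behavior can be sketched as follows. Proportional redistribution is an assumption about how DynamicWeightCalculator handles unavailable components; component names mirror this document:

```python
# Sketch of the proposed MSS formula. When a component is unavailable
# (e.g., no website URL, so no WC), its weight is redistributed
# proportionally across the remaining components -- an assumption
# about DynamicWeightCalculator's behavior, not verified code.

MSS_WEIGHTS = {"LQ": 0.30, "WC": 0.15, "BA": 0.20, "AIV": 0.35}

def mss(scores: dict) -> float:
    """scores maps component -> 0-100; omitted keys are unavailable."""
    weights = {k: w for k, w in MSS_WEIGHTS.items() if k in scores}
    total = sum(weights.values())
    return round(sum(scores[k] * w / total for k, w in weights.items()), 1)
```

For example, with all four components at (LQ 80, WC 60, BA 70, AIV 90) the score is 78.5; dropping WC renormalizes the other three weights rather than zeroing 15% of the score.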


5. Implementation Priority Matrix

| # | Gap | Impact | Effort | Dependencies | Priority |
|---|---|---|---|---|---|
| 1 | robots.txt Validation | High | Low (1-2 days) | None | P0 — Do first |
| 2 | Website-to-Listing Coherence | Critical | Medium (5-7 days) | LinkCrawler, BedrockListingAnalyzer | P0 |
| 3 | Listing Claim Accuracy | Medium-High | Low (built into #2) | Website Coherence | P0 (part of #2) |
| 4 | Public Documentation Quality | High | Medium (3-5 days) | LinkCrawler | P1 |
| 5 | Use Case Query Depth | Medium | Low (2-3 days) | None | P1 |
| 6 | 3P Review Signals | Medium | Low (1-2 days) | DataForSEO Content Analysis | P2 |

Estimated Total: 12-19 development days


6. Q4 2026 Preparation: Seller-Provided Evals

AWS disclosed they are building a system for sellers to provide specific evals to influence Agent Mode presence, targeting Q4 2026. Vellocity should prepare:

  1. Monitor the API/specification — When AWS publishes the eval format, Vellocity should be first to support it
  2. Pre-build eval content — The website coherence, documentation quality, and use case depth work we're doing now produces exactly the kind of structured content that evals will likely require
  3. Content Optimizer integration — Add an "Agent Mode Eval" export format once the spec is known
  4. Competitive advantage — If Vellocity partners are the first with optimized evals, they get first-mover ranking advantage

7. Relationship to Existing Documents

| Document | Relationship |
|---|---|
| docs/aws-3pi-market-analysis.md | Competitive landscape & valuation — this audit is about product capabilities |
| docs/marketplace-listings/ | Listing management docs — this audit adds quality criteria |
| docs/audits/ | Audit archive — this is a capability audit |
| docs/features/ | Feature documentation — implementation artifacts go here |

Sources

  • AWS Marketplace team direct disclosure (March 2026) — Agent Mode ranking criteria
  • Internal codebase audit of scoring services, analyzers, and content pipeline
  • DataForSEO API documentation (backlinks, LLM mentions, content analysis)
  • AWS Marketplace listing requirements documentation