
Implementation Plan: AWS Agent Mode Ranking Optimization

Date: March 16, 2026
Audit Reference: docs/aws-agent-mode-ranking-audit.md
Status: Planning
Estimated Total: 12-19 development days across 4 phases


Architecture Principle: Auto-Fallback at Every Layer

Every new service follows the same measured → estimated → unavailable pattern already established by DynamicWeightCalculator and BedrockListingAnalyzer:

Tier 1: Real Data (measured, confidence 1.0×)
  └─ DataForSEO APIs, direct HTTP fetch, Serper scrape

Tier 2: AI Proxy (estimated, confidence 0.7×)
  └─ Bedrock/Claude analysis of available content

Tier 3: Unavailable (weight redistributed)
  └─ Component excluded, DynamicWeightCalculator normalizes remaining weights

Why this matters: Partners without DataForSEO subscriptions still get useful scores. The DynamicWeightCalculator already handles weight normalization — we just need to register new components and their data source tiers.


Phase 0: robots.txt Validation (P0 — 1-2 days)

Why first: Lowest effort, binary impact, no dependencies. If a partner's site blocks AI agents, nothing else matters.

0.1 Create RobotsTxtValidator Service

File: app/Services/Marketplace/RobotsTxtValidator.php

class RobotsTxtValidator
{
    // AI agent user-agents to check (priority order)
    public const AGENT_USER_AGENTS = [
        'Amazonbot'       => 'AWS Marketplace agent crawler',
        'GPTBot'          => 'OpenAI/ChatGPT',
        'ClaudeBot'       => 'Anthropic/Claude',
        'Google-Extended' => 'Google AI (Gemini)',
        'PerplexityBot'   => 'Perplexity AI',
        'CCBot'           => 'Common Crawl (LLM training data)',
    ];

    public function validate(string $domain): RobotsTxtResult
    {
        // 1. Fetch {domain}/robots.txt (HTTP, 10s timeout)
        // 2. Parse with spatie/robots-txt (already in composer)
        // 3. Check each AI user-agent: allowed/blocked/not-mentioned
        // 4. Return structured result with recommendations
    }
}

Return DTO: RobotsTxtResult

class RobotsTxtResult
{
    public string $overallStatus;       // 'pass' | 'warning' | 'fail'
    public array $agentsAllowed;        // ['Googlebot', 'Bingbot']
    public array $agentsBlocked;        // ['GPTBot', 'ClaudeBot']
    public array $agentsNotMentioned;   // ['Amazonbot'] — unknown, likely allowed
    public bool $robotsTxtExists;       // false = no robots.txt at all
    public bool $blockingForLaunch;     // true if Amazonbot is explicitly blocked
    public string $recommendation;      // Human-readable fix
    public ?string $rawContent;         // The actual robots.txt content
}

Fallback: If HTTP fetch fails (timeout, DNS error), return overallStatus: 'unknown' with recommendation to check manually. No Bedrock fallback needed — this is a simple HTTP check.
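
The per-agent check can be sketched without the library dependency. A minimal classifier that labels each agent allowed, blocked, or not-mentioned (the function name is illustrative, and the semantics are deliberately simplified: exact agent match only, no `*` group or wildcard-path handling; real parsing stays with spatie/robots-txt):

```php
<?php
// Sketch: classify one AI user-agent against robots.txt content.
// Simplified: exact User-agent match, "Disallow: /" means blocked.
function agentStatus(string $robotsTxt, string $agent): string
{
    $agent = strtolower($agent);
    $groupAgents = [];
    $inRules = false;    // true once the current group has seen a rule line
    $mentioned = false;

    foreach (preg_split('/\r?\n/', $robotsTxt) as $line) {
        $line = trim(preg_replace('/#.*/', '', $line));
        if ($line === '') {
            continue;
        }

        if (preg_match('/^User-agent:\s*(.+)$/i', $line, $m)) {
            if ($inRules) {          // a rule line ended the previous group
                $groupAgents = [];
                $inRules = false;
            }
            $groupAgents[] = strtolower(trim($m[1]));
        } elseif (preg_match('/^Disallow:\s*(.*)$/i', $line, $m)) {
            $inRules = true;
            if (in_array($agent, $groupAgents, true)) {
                $mentioned = true;
                if (trim($m[1]) === '/') {
                    return 'blocked';
                }
            }
        } elseif (preg_match('/^Allow:/i', $line)) {
            $inRules = true;
            if (in_array($agent, $groupAgents, true)) {
                $mentioned = true;
            }
        }
    }

    return $mentioned ? 'allowed' : 'not-mentioned';
}
```

For example, given a file that disallows GPTBot entirely and allows Amazonbot, the classifier returns `blocked`, `allowed`, and `not-mentioned` for GPTBot, Amazonbot, and ClaudeBot respectively.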

0.2 Integrate into ListingContentFetcher

File: app/Services/DataForSEO/ListingContentFetcher.php

Add to fetchListingContent() return array:

'robots_txt' => $this->robotsTxtValidator->validate($domain),

This piggybacks on the existing fetch pipeline — when we're already hitting the partner's domain, check robots.txt at the same time.

0.3 Integrate into LaunchReadinessScore

File: app/Livewire/Marketplace/LaunchReadinessScore.php

Add new check category: "Agent Crawlability"
  • Blocking issue if Amazonbot is explicitly blocked
  • Warning if GPTBot or ClaudeBot are blocked
  • Pass if no AI agents are blocked or robots.txt allows all

0.4 Surface in Marketplace SEO Score UI

File: resources/views/default/panel/user/marketplace-seo-score/ (existing views)

Add a banner/alert when robots.txt blocks AI agents:
  • Red alert: "Your website blocks AWS Marketplace agents from crawling. This prevents your public documentation from being indexed for buyer queries."
  • Include specific fix instructions: "Add User-agent: Amazonbot\nAllow: / to your robots.txt"

0.5 Tests

  • Unit test: Parse known robots.txt patterns (block all, allow all, specific agent blocks)
  • Integration test: Validate against a few known domains
  • Edge cases: No robots.txt (= allow all), malformed robots.txt, timeout

Phase 1: Website-to-Listing Coherence (P0 — 5-7 days)

Why: This is the #1 cause of poor ranking per AWS. Critical path.

1.1 Create WebsiteCoherenceService

File: app/Services/Marketplace/WebsiteCoherenceService.php

This is the orchestrator. It coordinates crawling, comparison, and scoring.

class WebsiteCoherenceService
{
    public function __construct(
        protected LinkCrawler $crawler,
        protected BedrockListingAnalyzer $bedrockAnalyzer,
        protected ?DataForSEOClient $dataForSEOClient = null,
    ) {}

    /**
     * Compare listing content against partner's website.
     *
     * @param array  $listingContent  Normalized listing data
     * @param string $websiteUrl      Partner's product website URL
     * @param array  $context         Optional: domain, category, etc.
     * @return WebsiteCoherenceResult
     */
    public function analyze(
        array $listingContent,
        string $websiteUrl,
        array $context = []
    ): WebsiteCoherenceResult {
        // Step 1: Crawl the website (reuse LinkCrawler, max 15 pages)
        // Step 2: Compare listing claims vs. website content
        // Step 3: Score coherence across 4 dimensions
        // Step 4: Return structured result with discrepancies
    }
}

1.2 Auto-Fallback Design

Tier 1 — Real Data (measured):
  ├─ Serper scrape of partner website (JS-rendered, most accurate)
  ├─ DataForSEO Content Analysis (if subscribed) for claim verification
  └─ Direct HTTP crawl via LinkCrawler (fallback for Serper)

Tier 2 — AI Proxy (estimated):
  └─ Bedrock analyzes listing content ALONE and flags:
      - Claims that are commonly inaccurate ("free trial", pricing specifics)
      - Superlatives without evidence ("#1", "industry-leading")
      - Feature claims that lack specificity (vague enough to be unverifiable)
      (No website crawl — Bedrock scores based on listing content quality only)

Tier 3 — Unavailable:
  └─ No website URL provided → weight redistributed by DynamicWeightCalculator

Key insight for Tier 2: Even without crawling the website, Bedrock can flag likely coherence issues by analyzing the listing content for patterns that commonly fail accuracy checks. This gives partners useful feedback even when their website can't be reached.
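
The measured → estimated → unavailable cascade has the same shape in every new service, so it is worth sketching once generically (names and shapes here are illustrative, not existing code):

```php
<?php
// Sketch: generic auto-fallback cascade. Each tier is a callable that
// returns a result array, or null / throws when its data source is not
// available. The data_source label drives the confidence multiplier
// applied downstream (measured 1.0x, estimated 0.7x).
function resolveWithFallback(callable $measured, callable $estimated): array
{
    try {
        if (($result = $measured()) !== null) {
            return ['data_source' => 'measured', 'result' => $result];
        }
    } catch (\Throwable $e) {
        // fall through to the AI proxy tier
    }

    try {
        if (($result = $estimated()) !== null) {
            return ['data_source' => 'estimated', 'result' => $result];
        }
    } catch (\Throwable $e) {
        // fall through to unavailable
    }

    // Tier 3: component excluded; DynamicWeightCalculator redistributes.
    return ['data_source' => 'unavailable', 'result' => null];
}
```

A failed website crawl (Tier 1 throws) lands in the estimated tier automatically; if Bedrock is also unavailable, the component reports `unavailable` and drops out of the weighted composite.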

1.3 WebsiteCoherenceResult DTO

File: app/Services/Marketplace/DTO/WebsiteCoherenceResult.php

class WebsiteCoherenceResult
{
    public int $score;                    // 0-100 composite
    public string $dataSource;            // 'measured' | 'estimated' | 'unavailable'

    // Subscore breakdown
    public int $claimAlignmentScore;      // 0-100: Do claims match website?
    public int $featureCoverageScore;     // 0-100: Are features substantiated?
    public int $pricingConsistencyScore;  // 0-100: Does pricing align?
    public int $ctaValidityScore;         // 0-100: Are CTAs actually available?

    // Discrepancies found
    public array $discrepancies;          // [{field, listing_claim, website_evidence, severity}]
    public array $unverifiableClaims;     // Claims that couldn't be checked
    public array $recommendations;        // Actionable fixes

    // Metadata
    public int $pagesAnalyzed;
    public ?string $websiteUrl;
    public string $analyzedAt;
}
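
One plausible way to derive the composite from the four subscores (an assumption, not a settled formula: equal weighting and the 0.7× estimated multiplier are taken from the architecture section):

```php
<?php
// Sketch: fold the four coherence subscores into the 0-100 composite.
// Equal weighting and the 0.7x estimated multiplier are assumptions.
function compositeCoherenceScore(
    int $claimAlignment,
    int $featureCoverage,
    int $pricingConsistency,
    int $ctaValidity,
    string $dataSource   // 'measured' | 'estimated'
): int {
    $avg = ($claimAlignment + $featureCoverage + $pricingConsistency + $ctaValidity) / 4;
    $multiplier = $dataSource === 'estimated' ? 0.7 : 1.0;

    return (int) round($avg * $multiplier);
}
```

Under this model the same subscores produce a visibly lower composite when the result is only estimated, which keeps Tier 2 output from masquerading as verified data.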

1.4 Bedrock Coherence Prompt (Tier 1 — with website data)

Add to BedrockListingAnalyzer:

protected function analyzeWebsiteCoherence(
    array $listingContent,
    array $websiteContent,
    array $context
): array {
    $prompt = <<<PROMPT
    You are an AWS Marketplace listing accuracy auditor. Compare this
    listing's claims against the vendor's website content and identify
    discrepancies that would cause AWS's Agent Mode to rank this listing
    poorly.

    LISTING CONTENT:
    {listing summary}

    WEBSITE CONTENT:
    {crawled website summary}

    Score these dimensions (0-25 each):
    1. Claim Alignment — Do listing claims match website content?
    2. Feature Coverage — Are listed features substantiated on the website?
    3. Pricing Consistency — Does pricing information align?
    4. CTA Validity — Are CTAs (free trial, demo, contact) actually available?

    For each discrepancy found, return:
    - field: which listing field
    - listing_claim: what the listing says
    - website_evidence: what the website says (or "not found")
    - severity: critical | warning | info
    PROMPT;
}

1.5 Bedrock Coherence Prompt (Tier 2 — listing-only fallback)

When website crawl fails or no URL provided:

protected function analyzeCoherenceFromListingOnly(
    array $listingContent,
    array $context
): array {
    $prompt = <<<PROMPT
    You are an AWS Marketplace listing accuracy auditor. Analyze this
    listing for claims that are COMMONLY inaccurate based on patterns
    you've seen across marketplace listings. You do NOT have access to
    the vendor's website — score based on claim plausibility only.

    LISTING CONTENT:
    {listing summary}

    Flag:
    1. "Free trial" or "free tier" claims — these frequently don't exist on AWS MP
    2. Specific pricing claims — verify they're structured, not vague
    3. Superlative claims ("#1", "leading", "best") — flag as unverifiable
    4. Integration claims — are they specific ("integrates with Salesforce CRM
       via REST API") or vague ("integrates with popular tools")?
    5. Compliance claims (SOC 2, HIPAA, ISO) — flag if no detail provided

    Return a plausibility score (0-100, where 100 = very likely accurate)
    and specific flags for each concern.
    PROMPT;
}

1.6 Update DynamicWeightCalculator

File: app/Services/DataForSEO/DynamicWeightCalculator.php

Add website_coherence to IDEAL_WEIGHTS:

public const IDEAL_WEIGHTS = [
    'listing_quality'     => 0.30,  // reduced from 0.40
    'website_coherence'   => 0.15,  // NEW
    'backlink_authority'  => 0.20,  // reduced from 0.25
    'ai_visibility'       => 0.35,  // unchanged
];

The existing normalization logic handles this automatically — when website_coherence's data source is unavailable, its weight is redistributed across the other three components.
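
That redistribution can be sketched standalone (a simplified model of what DynamicWeightCalculator already does; the real implementation may also apply per-tier confidence multipliers):

```php
<?php
// Sketch: drop unavailable components and renormalize the remaining
// ideal weights so they sum to 1.0.
function redistributeWeights(array $idealWeights, array $unavailable): array
{
    $remaining = array_diff_key($idealWeights, array_flip($unavailable));
    $total = array_sum($remaining);

    return array_map(fn (float $w) => $w / $total, $remaining);
}

$weights = redistributeWeights([
    'listing_quality'    => 0.30,
    'website_coherence'  => 0.15,
    'backlink_authority' => 0.20,
    'ai_visibility'      => 0.35,
], ['website_coherence']);
// listing_quality ~0.353, backlink_authority ~0.235, ai_visibility ~0.412
```

With website_coherence dropped, the remaining 0.85 scales back up to 1.0, so partners without a website URL are scored on the other three components rather than penalized.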

1.7 Update MarketplaceSeoScoringService

File: app/Services/DataForSEO/MarketplaceSeoScoringService.php

  • Add WebsiteCoherenceService as a dependency
  • Call it during calculateScoreData() when a website URL is available
  • Pass result to DynamicWeightCalculator with appropriate data source tier
  • Store coherence result in MarketplaceSeoScore model (new columns)

1.8 Update MarketplaceSeoScore Model

File: app/Models/MarketplaceSeoScore.php

New fields (migration):

$table->unsignedTinyInteger('website_coherence')->nullable();
$table->json('website_coherence_breakdown')->nullable();
$table->json('website_discrepancies')->nullable();
$table->string('website_coherence_source', 20)->nullable(); // measured|estimated|unavailable

New weakness codes:

'LISTING_WEBSITE_MISMATCH',
'UNVERIFIABLE_CLAIMS',
'FREE_TRIAL_CLAIM_UNCONFIRMED',
'PRICING_MISMATCH',
'BLOCKED_AI_AGENTS',

1.9 Update Content Optimizer UI

File: resources/views/default/panel/user/content-optimizer/show.blade.php

Add a "Website Alignment" panel in the score sidebar:
  • Show coherence score alongside existing field scores
  • List specific discrepancies with severity badges
  • "Listing says: Free trial available" / "Website: No trial signup found" format
  • Actionable fix for each discrepancy

1.10 Tests

  • Unit: WebsiteCoherenceService with mocked crawl data
  • Unit: Bedrock prompt parsing for coherence results
  • Unit: DynamicWeightCalculator with 4 components (new weight distribution)
  • Integration: End-to-end scoring with website coherence included
  • Fallback: Verify Tier 2 kicks in when website crawl fails
  • Edge cases: No website URL, unreachable site, redirect chains

Phase 2: Public Documentation Quality + Use Case Depth (P1 — 5-8 days)

2.1 Create PublicDocumentationAnalyzer

File: app/Services/Marketplace/PublicDocumentationAnalyzer.php

class PublicDocumentationAnalyzer
{
    public function __construct(
        protected LinkCrawler $crawler,
        protected BedrockListingAnalyzer $bedrockAnalyzer,
    ) {}

    /**
     * Assess quality of a partner's public documentation.
     */
    public function analyze(
        string $websiteUrl,
        array $listingContent,
        array $context = []
    ): PublicDocumentationResult {
        // Step 1: Discover docs (try /docs, /documentation, /help, sitemap.xml)
        // Step 2: Crawl discovered docs pages (max 20)
        // Step 3: Assess coverage against listing claims
        // Step 4: Score documentation quality
    }
}

2.2 Auto-Fallback Design

Tier 1 — Real Data (measured):
  ├─ Crawl docs site directly (discover /docs, /help, /documentation paths)
  ├─ Parse sitemap.xml for docs section
  └─ DataForSEO On-Page API for structure analysis (if subscribed)

Tier 2 — AI Proxy (estimated):
  └─ Bedrock analyzes listing content and estimates whether:
      - Features described are complex enough to require documentation
      - Technical depth suggests docs should exist
      - Product type typically has public docs (SaaS = yes, services = maybe)
      Returns a "documentation likelihood" score and recommendations

Tier 3 — Unavailable:
  └─ No website URL → skip documentation assessment entirely

2.3 Documentation Discovery Heuristic

Before crawling, discover where docs live:

protected function discoverDocsUrl(string $baseUrl): ?string
{
    $candidates = [
        '/docs',
        '/documentation',
        '/help',
        '/support',
        '/resources',
        '/developer',
        '/api',
        '/guides',
    ];

    // Also check: subdomain patterns (docs.example.com, help.example.com)
    // Also check: sitemap.xml for /docs/ paths
    // Return first reachable URL, or null
}
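
The candidate expansion (paths plus subdomains) can be sketched as a pure function, leaving the reachability probe (HTTP HEAD with a short timeout) to the crawler. The helper name is illustrative:

```php
<?php
// Sketch: expand a base URL into candidate documentation URLs.
// Reachability checking happens separately in discoverDocsUrl().
function docsUrlCandidates(string $baseUrl): array
{
    $parts = parse_url($baseUrl);
    $scheme = $parts['scheme'] ?? 'https';
    $host = preg_replace('/^www\./', '', $parts['host'] ?? $baseUrl);

    $paths = ['/docs', '/documentation', '/help', '/support',
              '/resources', '/developer', '/api', '/guides'];
    $subdomains = ['docs', 'help', 'developer'];

    $candidates = [];
    foreach ($paths as $path) {
        $candidates[] = "{$scheme}://{$host}{$path}";
    }
    foreach ($subdomains as $sub) {
        $candidates[] = "{$scheme}://{$sub}.{$host}";
    }

    return $candidates;
}
```

The first reachable candidate wins; stripping a leading `www.` keeps subdomain candidates like `docs.example.com` well-formed.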

2.4 PublicDocumentationResult DTO

class PublicDocumentationResult
{
    public int $score;                    // 0-100
    public string $dataSource;            // measured | estimated | unavailable

    public int $existenceScore;           // 0-100: Does a docs site exist?
    public int $featureCoverageScore;     // 0-100: Do docs cover listing features?
    public int $useCaseDepthScore;        // 0-100: Business context in docs?
    public int $structuredContentScore;   // 0-100: Guides, API refs, tutorials?
    public int $freshnessScore;           // 0-100: Recently updated?

    public ?string $docsUrl;              // Discovered docs URL
    public int $pagesAnalyzed;
    public array $coveredFeatures;        // Features found in docs
    public array $missingFeatures;        // Listed features not in docs
    public array $recommendations;
}

2.5 Integration Points

  • LaunchReadinessScore: new category "Public Documentation" with feature coverage checks
  • BedrockListingAnalyzer: new analyzePublicDocumentation() method
  • MarketplaceSeoScoringService: factor doc quality into AI Visibility subscore (docs improve agent discoverability)
  • ContentReadinessAnalyzerCapability: extend "Product Clarity" (35%) to include public doc assessment alongside internal KB

2.6 Upgrade Use Case Specificity Scoring

File: app/Services/DataForSEO/ListingQualityAnalyzer.php

Update calculateDifferentiationScore():

// Current: just count use cases
if (count($useCases) >= 3) { $score += 15; }

// New: score each use case for specificity
foreach ($useCases as $useCase) {
    $specificity = $this->scoreUseCaseSpecificity($useCase);
    // Bonus for specific, query-matchable use cases
    // Penalty for vague one-word use cases
}

Specificity criteria per use case:

protected function scoreUseCaseSpecificity(string $useCase): int
{
    $score = 0;
    $lower = strtolower($useCase);

    // Length check: >50 chars = specific, 31-50 = moderate, else too vague
    if (mb_strlen($useCase) > 50) $score += 20;
    elseif (mb_strlen($useCase) > 30) $score += 10;

    // Problem statement: starts with action verb
    $problemVerbs = ['reduce', 'automate', 'eliminate', 'streamline',
                     'accelerate', 'simplify', 'monitor', 'detect',
                     'prevent', 'optimize', 'scale', 'secure'];
    foreach ($problemVerbs as $verb) {
        if (str_starts_with($lower, $verb)) { $score += 20; break; }
    }

    // Audience signal: names a role or industry
    $audienceSignals = ['team', 'engineer', 'developer', 'analyst',
                        'healthcare', 'fintech', 'enterprise', 'startup',
                        'devops', 'security', 'compliance', 'marketing'];
    foreach ($audienceSignals as $signal) {
        if (str_contains($lower, $signal)) { $score += 20; break; }
    }

    // Outcome signal: measurable result
    if (preg_match('/\d+%|\d+x|minutes|hours|seconds|faster|reduction/', $lower)) {
        $score += 20;
    }

    // Query matchability: contains "for" or "when" framing
    if (preg_match('/\bfor\b|\bwhen\b|\bif you\b|\bneed to\b/', $lower)) {
        $score += 20;
    }

    return min(100, $score);
}
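
As a sanity check, the good/bad examples from §2.7 run through this rubric as expected. The block below is a self-contained condensed copy of the scorer above (renamed `scoreUseCase` for brevity), included only so the example runs standalone:

```php
<?php
// Condensed copy of scoreUseCaseSpecificity() for a quick worked example.
function scoreUseCase(string $useCase): int
{
    $score = 0;
    $lower = strtolower($useCase);

    if (mb_strlen($useCase) > 50) $score += 20;
    elseif (mb_strlen($useCase) > 30) $score += 10;

    foreach (['reduce', 'automate', 'eliminate', 'streamline', 'accelerate',
              'simplify', 'monitor', 'detect', 'prevent', 'optimize',
              'scale', 'secure'] as $verb) {
        if (str_starts_with($lower, $verb)) { $score += 20; break; }
    }

    foreach (['team', 'engineer', 'developer', 'analyst', 'healthcare',
              'fintech', 'enterprise', 'startup', 'devops', 'security',
              'compliance', 'marketing'] as $signal) {
        if (str_contains($lower, $signal)) { $score += 20; break; }
    }

    if (preg_match('/\d+%|\d+x|minutes|hours|seconds|faster|reduction/', $lower)) {
        $score += 20;
    }
    if (preg_match('/\bfor\b|\bwhen\b|\bif you\b|\bneed to\b/', $lower)) {
        $score += 20;
    }

    return min(100, $score);
}

echo scoreUseCase('Automate SOC 2 compliance reporting for mid-market SaaS companies, reducing audit prep time by 60%'); // 100
echo scoreUseCase('Compliance'); // 20 (audience signal only)
```

The specific, query-matchable use case hits every criterion; the one-word use case scores 20 only because "compliance" happens to be an audience signal.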

2.7 Update AI Assist Prompts for Use Cases

File: app/CustomExtensions/CloudMarketplace/System/Services/ListingGeneratorService.php

When generating use cases, update the field-specific guidance:

Write use cases as buyer-query-answerable statements. Each use case should:
1. Start with an action verb (Automate, Reduce, Monitor, Secure, Scale)
2. Name a specific audience or industry when relevant
3. Describe a measurable outcome when possible
4. Be specific enough that a buyer searching "I need a tool that..." would match

Example of GOOD use case:
"Automate SOC 2 compliance reporting for mid-market SaaS companies, reducing audit prep time by 60%"

Example of BAD use case:
"Compliance"

2.8 Tests

  • Unit: Documentation discovery heuristic
  • Unit: Use case specificity scoring with known good/bad examples
  • Integration: Documentation analyzer with mocked crawl
  • Fallback: Tier 2 documentation estimation without website access

Phase 3: 3P Review Signals (P2 — 1-2 days)

3.1 Create ThirdPartyPresenceChecker

File: app/Services/Marketplace/ThirdPartyPresenceChecker.php

class ThirdPartyPresenceChecker
{
    // Review platforms AWS is known to use
    public const REVIEW_PLATFORMS = [
        'g2'          => ['domain' => 'g2.com',          'label' => 'G2'],
        'capterra'    => ['domain' => 'capterra.com',    'label' => 'Capterra'],
        'trustradius' => ['domain' => 'trustradius.com', 'label' => 'TrustRadius'],
        'gartner'     => ['domain' => 'gartner.com',     'label' => 'Gartner Peer Insights'],
    ];

    public function check(string $productName, string $vendorName): ThirdPartyPresenceResult
    {
        // Tier 1: DataForSEO Content Analysis
        //   Search: "{productName}" site:g2.com
        //   Search: "{productName}" site:trustradius.com
        //   etc.
        //
        // Tier 2 (DataForSEO unavailable): Serper search
        //   Same queries via Serper API
        //
        // Tier 3: Return 'unknown' with recommendation to check manually
    }
}
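
The site-restricted queries and the coverage score can be sketched as follows. The query shape follows the comments above; scoring presence as found/checked is an assumption, and the helper names are illustrative:

```php
<?php
// Sketch: build site-restricted search queries per review platform and
// turn presence results into a 0-100 coverage score.
const REVIEW_PLATFORM_DOMAINS = [
    'g2'          => 'g2.com',
    'capterra'    => 'capterra.com',
    'trustradius' => 'trustradius.com',
    'gartner'     => 'gartner.com',
];

function presenceQueries(string $productName): array
{
    return array_map(
        fn (string $domain) => sprintf('"%s" site:%s', $productName, $domain),
        REVIEW_PLATFORM_DOMAINS
    );
}

function presenceScore(array $platformResults): int
{
    $checked = count($platformResults);
    if ($checked === 0) {
        return 0;
    }

    $found = count(array_filter($platformResults, fn (array $r) => $r['found']));

    return (int) round(100 * $found / $checked);
}
```

Both tiers (DataForSEO and Serper) can share these queries; only the transport differs.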

3.2 ThirdPartyPresenceResult DTO

class ThirdPartyPresenceResult
{
    public int $score;                    // 0-100 (presence coverage)
    public string $dataSource;            // measured | estimated | unavailable
    public array $platformResults;        // [{platform, found, url, snippet}]
    public int $platformsFound;           // Count of platforms with presence
    public int $platformsChecked;         // Total checked
    public array $recommendations;        // "Create a G2 profile to improve..."
}

3.3 Integration Points

  • MarketplaceSeoScoringService: add third_party_signals to AI Visibility subscore
  • BedrockListingAnalyzer: include 3P presence in analyzeAiVisibility() context
  • Marketplace SEO Score UI: "3P Review Visibility" section showing detected platforms
  • LaunchReadinessScore: non-blocking recommendation if no G2 profile detected
  • Content Optimizer recommendations: "AWS agents use G2 data — claim your profile at g2.com"

3.4 Tests

  • Unit: DataForSEO content search mocking
  • Unit: Serper fallback search
  • Edge cases: Product name with special characters, ambiguous names

Phase 4: MSS Formula Rebalance + UI Updates (runs parallel to Phase 1-3)

4.1 Migration for New Score Columns

Schema::table('marketplace_seo_scores', function (Blueprint $table) {
    // Website coherence
    $table->unsignedTinyInteger('website_coherence')->nullable()->after('ai_visibility');
    $table->json('website_coherence_breakdown')->nullable();
    $table->json('website_discrepancies')->nullable();
    $table->string('website_coherence_source', 20)->nullable();

    // robots.txt
    $table->string('robots_txt_status', 20)->nullable(); // pass|warning|fail|unknown
    $table->json('robots_txt_details')->nullable();

    // Documentation quality
    $table->unsignedTinyInteger('documentation_quality')->nullable();
    $table->string('documentation_quality_source', 20)->nullable();

    // 3P presence
    $table->unsignedTinyInteger('third_party_presence')->nullable();
    $table->json('third_party_platforms')->nullable();

    // Use case specificity (field-level detail)
    $table->json('use_case_specificity_scores')->nullable();
});

4.2 Update MarketplaceSeoScore Model

Add new attributes, casts, and weakness codes for all new dimensions.

4.3 UI Updates

Marketplace SEO Score Dashboard:
  • Add "Website Alignment" card showing coherence score + discrepancies
  • Add "Agent Crawlability" indicator (robots.txt pass/fail)
  • Add "3P Review Visibility" section
  • Add "Documentation Quality" indicator

Content Optimizer:
  • Side-by-side comparison panel for discrepancies
  • Use case specificity indicators per use case field
  • "Example buyer queries this would match" for each use case

Launch Readiness:
  • New check categories: Agent Crawlability, Website Alignment, Documentation Coverage
  • Blocking vs. warning classification for each


Rollout Strategy

Feature Flags

Each phase ships behind a feature flag so it can be tested independently:

'agent_mode_robots_check'      => false,  // Phase 0
'agent_mode_website_coherence' => false,  // Phase 1
'agent_mode_doc_quality'       => false,  // Phase 2
'agent_mode_3p_presence'       => false,  // Phase 3
'agent_mode_mss_rebalance'     => false,  // Phase 4 (weights change)

Rollout Order

  1. Phase 0 → enable agent_mode_robots_check → monitor for 1 week
  2. Phase 1 → enable agent_mode_website_coherence → monitor for 1 week
  3. Phase 4 → enable agent_mode_mss_rebalance (depends on Phase 1 data)
  4. Phase 2 → enable agent_mode_doc_quality
  5. Phase 3 → enable agent_mode_3p_presence

Backwards Compatibility

When feature flags are off, the existing MSS formula (LQ 40%, BA 25%, AIV 35%) remains unchanged. The DynamicWeightCalculator naturally handles this — new components simply have unavailable source and their weight redistributes.


Q4 2026 Preparation

AWS is building support for seller-provided evals that influence Agent Mode ranking, targeting Q4 2026.

Action items:
  1. Monitor AWS docs/blog for eval spec announcements
  2. Pre-build structured content — the coherence, documentation, and use case work produces exactly the structured data evals will likely require
  3. Reserve AgentModeEvalExporter service — once spec is known, export optimized content in eval format
  4. Content Optimizer — add "Agent Mode Eval" export action (spec-dependent)


File Index

New Files to Create

  • app/Services/Marketplace/RobotsTxtValidator.php (Phase 0): validate partner robots.txt for AI agent access
  • app/Services/Marketplace/DTO/RobotsTxtResult.php (Phase 0): robots.txt validation result
  • app/Services/Marketplace/WebsiteCoherenceService.php (Phase 1): orchestrate website-to-listing comparison
  • app/Services/Marketplace/DTO/WebsiteCoherenceResult.php (Phase 1): coherence analysis result
  • app/Services/Marketplace/PublicDocumentationAnalyzer.php (Phase 2): assess public documentation quality
  • app/Services/Marketplace/DTO/PublicDocumentationResult.php (Phase 2): documentation analysis result
  • app/Services/Marketplace/ThirdPartyPresenceChecker.php (Phase 3): detect G2/Capterra/TrustRadius presence
  • app/Services/Marketplace/DTO/ThirdPartyPresenceResult.php (Phase 3): 3P presence result
  • database/migrations/xxxx_add_agent_mode_columns_to_marketplace_seo_scores.php (Phase 4): new score columns

Existing Files to Modify

  • app/Services/DataForSEO/ListingContentFetcher.php (Phase 0): add robots.txt check to fetch pipeline
  • app/Livewire/Marketplace/LaunchReadinessScore.php (Phases 0, 1, 2): new check categories
  • app/Services/DataForSEO/DynamicWeightCalculator.php (Phase 1): add website_coherence to IDEAL_WEIGHTS
  • app/Services/DataForSEO/MarketplaceSeoScoringService.php (Phase 1): integrate coherence into MSS pipeline
  • app/Services/DataForSEO/BedrockListingAnalyzer.php (Phases 1, 2): new coherence + docs analysis methods
  • app/Services/DataForSEO/ListingQualityAnalyzer.php (Phase 2): upgrade use case specificity scoring
  • app/Services/Optimizer/FieldSeoScoringService.php (Phase 2): use case depth scoring
  • app/CustomExtensions/CloudMarketplace/System/Services/ListingGeneratorService.php (Phase 2): updated use case generation prompts
  • app/Models/MarketplaceSeoScore.php (Phase 4): new columns, casts, weakness codes
  • app/Services/Marketplace/ComplianceValidatorService.php (Phase 1): accuracy warning rules
  • resources/views/default/panel/user/content-optimizer/show.blade.php (Phase 1): website alignment panel
  • resources/views/default/panel/user/marketplace-seo-score/ (Phases 0, 1, 3): new score sections