# Partner Review Improvements: MCP-First Workflow Analysis

## Context
This analysis evaluates patterns from a real-world MCP-powered content/SEO/lead workflow against Vellocity's current partner user experience. The source operator uses Claude + MCPs (Supabase, Google Search Console, GA4, Gmail, web search) in a single conversation to manage keyword research, content creation, SEO optimization, lead management, and analytics -- all with human-in-the-loop approval.
The goal is to identify concrete improvements for Vellocity's partner users, particularly around the review experience, content feedback loops, and analytics-driven optimization.
## Pattern-by-Pattern Comparison

### 1. Side-by-Side Diff Review for Content Changes
Write-up pattern: The operator built a UI where content edits show a side-by-side view with red (removed) and green (added) highlighting. The reviewer cannot approve or delete directly -- they see exactly what changed before approving.
Vellocity current state: The SharedAsset model (app/Extensions/ContentManager/System/Models/SharedAsset.php) tracks approval statuses per partner and stores content as a JSON blob. The ApprovalWorkflowService supports approve/reject/request-changes flows. However, the plan detail view (cosell/plans/show.blade.php) only shows a flat list of shared assets with name and type -- no content preview, no diff view, no inline review capability.
Gap: Partners reviewing co-sell content have no way to see what actually changed between revisions. The resetApprovals() method (line 332 of SharedAsset.php) resets approval status when content is updated, but there's no version history or diff to show what was updated.
Recommendation:
- Add a content_versions JSON column (or a SharedAssetVersion model) to track content revisions
- Build a diff view component that renders side-by-side or inline diffs of the content JSON blob (text fields specifically)
- Surface this in the approval UI so partners see exactly what changed before approving
- Priority: High -- this directly impacts the partner review experience quality
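A minimal sketch of the diff step, assuming a `SharedAssetVersion`-style model stores the old and new text fields from the content JSON blob (the content shape here is illustrative, not the real schema). A standard longest-common-subsequence walk produces the removed/added/unchanged operations the side-by-side view would render:

```php
<?php
// Minimal line-level diff sketch for SharedAsset text fields.
// The content shape is an assumption; adapt to the real JSON layout.

function diffLines(array $old, array $new): array
{
    // Build the longest-common-subsequence table bottom-up.
    $m = count($old); $n = count($new);
    $lcs = array_fill(0, $m + 1, array_fill(0, $n + 1, 0));
    for ($i = $m - 1; $i >= 0; $i--) {
        for ($j = $n - 1; $j >= 0; $j--) {
            $lcs[$i][$j] = $old[$i] === $new[$j]
                ? $lcs[$i + 1][$j + 1] + 1
                : max($lcs[$i + 1][$j], $lcs[$i][$j + 1]);
        }
    }
    // Walk the table, emitting ['same'|'removed'|'added', line] ops.
    $ops = []; $i = $j = 0;
    while ($i < $m && $j < $n) {
        if ($old[$i] === $new[$j])                { $ops[] = ['same', $old[$i]];    $i++; $j++; }
        elseif ($lcs[$i + 1][$j] >= $lcs[$i][$j + 1]) { $ops[] = ['removed', $old[$i]]; $i++; }
        else                                      { $ops[] = ['added', $new[$j]];   $j++; }
    }
    while ($i < $m) { $ops[] = ['removed', $old[$i++]]; }
    while ($j < $n) { $ops[] = ['added', $new[$j++]]; }
    return $ops;
}

$ops = diffLines(
    ["Intro paragraph.", "Old claim without a source."],
    ["Intro paragraph.", "New claim, with a source."]
);
// [['same', 'Intro paragraph.'], ['removed', 'Old claim...'], ['added', 'New claim...']]
```

The UI layer then maps `removed` ops to red highlighting and `added` ops to green, in either side-by-side or inline layout.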
### 2. Content Quality Guardrails (Style Guide Enforcement)
Write-up pattern: The operator uses a 2000-word BLOG_WRITING_GUIDE.md that Claude reads before every post. Rules include: no em dashes (instant AI tell), questions as H2s for AI discoverability, strict formatting requirements. Claude follows these rules consistently because the guide is loaded at conversation start.
Vellocity current state: The BrandVoiceMerger service (CoSell/BrandVoiceMerger.php) merges tone-of-voice and target audience between two partners, and generates a formatted prompt context string via formatForPrompt(). The ContentReadinessAnalyzerCapability scores ICP completeness, product clarity, and market definition. The EnrichBrandVoiceCapability exists for brand enrichment.
However, there is no concept of a content style guide (writing rules, formatting standards, AI-tell avoidance rules) that gets enforced during content generation or validated during review.
Gap: Brand voice (tone, audience) is not the same as a writing style guide (structural rules, formatting standards, anti-AI-slop detection). Partners generate co-sell content through the agent, but there are no guardrails for:

- Content structure rules (H2 format, paragraph length, CTA placement)
- AI-tell detection (em dashes, generic phrases, repetitive patterns)
- Data citation requirements (claims must have sources)
- Visual density requirements (minimum charts/visuals per asset)
Recommendation:
- Add a content_style_guide field to the Partner or Company model -- a structured JSON or markdown field defining writing rules
- Extend BrandVoiceMerger.buildJointGuidelines() to incorporate style guides from both partners (merge structural rules, take the stricter standard for each)
- Add a post-generation validation step that checks content against style guide rules before submitting for approval
- Surface style guide compliance as a score alongside the content readiness score
- Priority: Medium-High -- directly improves content quality for partners without manual enforcement
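The post-generation validation step could look like the following sketch. The rule names (`no_em_dashes`, `questions_as_h2`, `max_paragraph_words`) and the markdown content shape are hypothetical, standing in for whatever the `content_style_guide` field ends up storing:

```php
<?php
// Hypothetical post-generation check against a partner's style guide.
// Rule names and the content shape are illustrative, not a real schema.

function validateAgainstStyleGuide(string $body, array $rules): array
{
    $violations = [];

    // "Instant AI tell" rule from the write-up: no em dashes (U+2014).
    if (($rules['no_em_dashes'] ?? false) && str_contains($body, "\u{2014}")) {
        $violations[] = 'Contains em dashes (flagged as an AI tell)';
    }

    // AI-discoverability rule: every H2 should be phrased as a question.
    if ($rules['questions_as_h2'] ?? false) {
        preg_match_all('/^## (.+)$/m', $body, $h2s);
        foreach ($h2s[1] as $heading) {
            if (!str_ends_with(trim($heading), '?')) {
                $violations[] = "H2 is not phrased as a question: \"$heading\"";
            }
        }
    }

    // Structural rule: cap paragraph length.
    if (isset($rules['max_paragraph_words'])) {
        foreach (preg_split('/\n{2,}/', $body) as $para) {
            if (str_word_count($para) > $rules['max_paragraph_words']) {
                $violations[] = 'Paragraph exceeds word limit';
            }
        }
    }

    return $violations;
}

$rules = ['no_em_dashes' => true, 'questions_as_h2' => true, 'max_paragraph_words' => 120];
$violations = validateAgainstStyleGuide("## Why choose us\n\nShort intro.", $rules);
// One violation: the H2 is not a question.
```

An empty `$violations` array means the asset can proceed to approval; otherwise the list feeds the compliance score surfaced next to the content readiness score.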
### 3. SEO Feedback Loop (Impressions Without Clicks -> Title/Description Updates)
Write-up pattern: The operator checks Google Search Console for pages getting impressions but no clicks, then asks Claude to propose title/description changes. The result on day 9 was a visible spike in impressions after updating what wasn't working. This is a tight feedback loop: measure -> identify underperformers -> optimize -> re-measure.
Vellocity current state: There are SEO-related widgets:
- seo-intelligence.blade.php -- SEO score analysis form
- marketplace-seo.blade.php -- Average SEO score, discoverability, persuasiveness
- LaunchReadinessScore.php -- SEO optimization category in launch readiness
- ContentReadinessAnalyzerCapability scores target keywords
But these are static analysis tools -- they analyze content at a point in time. There is no feedback loop that connects published content performance back to optimization suggestions.
Gap: Published co-sell content (blog posts, landing pages) goes out via CrossPartnerPublishingService, but there's no mechanism to:
1. Track how published content performs in search (impressions, clicks, CTR, position)
2. Identify underperforming content (high impressions, low clicks = bad title/meta)
3. Surface AI-generated optimization suggestions back to partners
4. Track whether optimizations improved performance
Recommendation:
- Add a ContentPerformanceTracker service that periodically pulls search performance data for published SharedAssets
- Create an "Optimization Suggestions" capability that analyzes published content against its performance data and proposes title/meta/structural changes
- Build a dashboard widget showing "Content needing attention" (high impressions, low CTR)
- When optimization is suggested, create a new version of the SharedAsset with changes highlighted (ties back to improvement #1)
- Priority: High -- this is the single biggest ROI pattern from the write-up. The operator's breakthrough moment was explicitly this feedback loop
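The "Content needing attention" filter is the simplest piece of this loop. A sketch, with thresholds and the metrics-row shape as assumptions (real values would come from the proposed ContentPerformanceTracker):

```php
<?php
// Sketch: flag published assets with high impressions but low CTR,
// i.e. content that ranks but whose title/meta fails to earn the click.
// Thresholds and the metrics shape are assumptions for illustration.

function findUnderperformers(array $metrics, int $minImpressions = 1000, float $maxCtr = 0.01): array
{
    return array_values(array_filter($metrics, function (array $m) use ($minImpressions, $maxCtr) {
        $ctr = $m['impressions'] > 0 ? $m['clicks'] / $m['impressions'] : 0.0;
        return $m['impressions'] >= $minImpressions && $ctr < $maxCtr;
    }));
}

$flagged = findUnderperformers([
    ['asset_id' => 1, 'impressions' => 5200, 'clicks' => 12],  // CTR 0.23% -> flag
    ['asset_id' => 2, 'impressions' => 4100, 'clicks' => 95],  // CTR 2.3%  -> fine
    ['asset_id' => 3, 'impressions' => 140,  'clicks' => 0],   // too few impressions
]);
// $flagged contains only asset_id 1
```

Each flagged asset then becomes input to the optimization-suggestions capability, and the proposed changes arrive as a new version with a diff (improvement #1).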
### 4. Lead Journey Attribution (How Did the Lead Find Us?)
Write-up pattern: Using GA4, the operator gets the exact journey: Google search -> blog post -> homepage -> ROI calculator -> contact form. This context is available to Claude before drafting a response, so outreach is informed by what the lead already saw and engaged with.
Vellocity current state: The AuditDashboardWidget (628 lines) tracks:
- Marketplace ROI
- Lead funnel (MQL -> SQL -> Opportunity -> Closed)
- Content strategist influence metrics
- Revenue attribution by source
The CoSellAnalyticsService tracks engagement metrics per plan (impressions, clicks, shares by channel). The DealInfluenceTrackingCapability exists for content attribution.
However, the analytics are aggregate -- they show totals per campaign or per channel. There is no individual lead journey tracking that shows "Lead X came from Google, read blog post Y, then visited the listing page, then contacted us."
Gap: Partners cannot see the path a specific lead took through their co-sell content. The ContentAttributionController and content tagging system exist but are oriented toward aggregate reporting, not individual lead journeys.
Recommendation:
- Extend the UrlTrackingCodeGenerator (327 lines) to generate UTM-tagged URLs for each SharedAsset publication
- Add a LeadJourney model that tracks touchpoints per lead (page visited, content consumed, time spent)
- Surface lead journey data in the co-sell analytics dashboard so partners can see: "This lead found your joint blog post via Google, spent 4 minutes reading, then navigated to the listing page"
- Feed journey context into the agent when drafting outreach, so the AI can reference what the lead already knows
- Priority: Medium -- valuable but depends on having GA4/analytics integrations connected
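The UTM tagging that makes per-asset journey tracking possible is straightforward. A sketch, using the standard GA parameter names; the `cosell` medium value and the helper itself are assumptions, not existing UrlTrackingCodeGenerator code:

```php
<?php
// Sketch of per-publication UTM tagging. Parameter names follow the
// standard Google Analytics conventions; the medium/campaign values
// chosen here are illustrative.

function utmTaggedUrl(string $baseUrl, int $assetId, string $channel, string $planSlug): string
{
    $params = [
        'utm_source'   => $channel,            // where the link was published
        'utm_medium'   => 'cosell',            // distinguishes co-sell traffic
        'utm_campaign' => $planSlug,           // ties clicks back to the plan
        'utm_content'  => 'asset-' . $assetId, // ties clicks to one SharedAsset
    ];
    $sep = str_contains($baseUrl, '?') ? '&' : '?';
    return $baseUrl . $sep . http_build_query($params);
}

echo utmTaggedUrl('https://example.com/blog/joint-post', 42, 'linkedin', 'q3-launch');
// https://example.com/blog/joint-post?utm_source=linkedin&utm_medium=cosell&utm_campaign=q3-launch&utm_content=asset-42
```

With `utm_content` carrying the asset ID, each analytics touchpoint can be resolved back to a specific SharedAsset and stitched into a LeadJourney record.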
### 5. Unified Context (All Tools in One Conversation)
Write-up pattern: The operator's key insight: "Claude talks to all of these in one conversation. No context switching. I literally ask him what leads are still waiting for a response then immediately after ask it to send them an email." The power is in unified context -- keyword DB, search console, analytics, CRM, email all accessible from one Claude session.
Vellocity current state: The agent architecture has 10+ capabilities (CoSellMatching, JointGTMPlanner, PartnerIntelligence, LinkedInGraph, DealInfluenceTracking, PublishToSocialMedia, MarketplaceAwareness, ContentReadinessAnalyzer). These are well-structured as individual capabilities that can be composed.
However, these capabilities are invoked through the agent wizard flow (CoSellWizardController), which is a step-by-step UI. There's no indication of a unified conversational interface where a partner can seamlessly flow between "check my analytics" -> "which content is underperforming?" -> "suggest improvements" -> "apply changes" -> "submit for partner approval" in a single conversation.
Gap: The capabilities exist in isolation. A partner can run an agent to generate content, separately check analytics, separately review approvals. But the unified context that makes the MCP pattern powerful is missing.
Recommendation:

- Consider surfacing the agent capabilities through a chat-style interface where partners can invoke any capability conversationally
- Enable capability chaining: the output of one capability (e.g., analytics showing low CTR on a blog post) can be directly fed as input to another (content optimization)
- Allow the agent to maintain conversation context across capability invocations within a session
- Priority: Medium -- architectural improvement that amplifies all other capabilities
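The chaining idea can be reduced to a toy sketch: each "capability" is a callable that receives the previous capability's output plus a shared, persistent context. The capability names and payloads below are illustrative, not the real capability classes:

```php
<?php
// Toy sketch of capability chaining with shared conversation context.
// Names and payloads are illustrative, not the real capability classes.

function runChain(array $capabilities, array $context): array
{
    $output = null;
    foreach ($capabilities as $name => $capability) {
        $output = $capability($output, $context);  // prior output feeds the next step
        $context['history'][] = $name;             // context accumulates across steps
    }
    return ['result' => $output, 'context' => $context];
}

$chain = [
    'analytics'    => fn($in, $ctx) => ['asset_id' => 7, 'ctr' => 0.002],
    'optimization' => fn($in, $ctx) => ['asset_id' => $in['asset_id'], 'suggestion' => 'Rewrite title'],
];
$run = runChain($chain, ['partner_id' => 3, 'history' => []]);
// $run['result'] => ['asset_id' => 7, 'suggestion' => 'Rewrite title']
```

The real implementation would let the conversational agent pick and order the capabilities, but the core design choice is the same: outputs flow forward, and context outlives any single invocation.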
### 6. Analytics That Are Real, Not Simulated
Write-up pattern: The operator connects to real Google Search Console, real GA4, real Supabase data. Every metric is actual.
Vellocity current state: The CoSellAnalyticsService has multiple methods that return rand() values:
- getEngagementMetrics() (lines 98-122): rand(5000, 50000) for impressions, rand(500, 5000) for engagements
- getPipelineMetrics() (lines 130-144): rand(20, 100) for leads, rand(100000, 1000000) for pipeline value
- getPartnerContribution() (lines 183-201): rand(40, 60) for engagement contribution
- getPartnerPerformance() (lines 315-323): multiple rand() calls for pipeline, ROI, approval time
The CrossPartnerPublishingService has PRODUCTION TODO comments for all publishing channels (LinkedIn, blog, email, Twitter) indicating simulated responses.
Gap: This is a significant credibility issue. Partners seeing random numbers in their analytics dashboard will lose trust in the platform. Even demo/staging environments should use realistic seeded data, not rand().
Recommendation:
- Replace all rand() calls with one of:
  - Actual integration data (pull from connected services)
  - Deterministic sample data based on plan/relationship IDs (for demo mode)
  - Null/empty states with clear "Connect [service] to see real data" CTAs
- Implement the actual integration layer for at least one channel (LinkedIn API is the most impactful for B2B co-sell content)
- Add data freshness indicators so partners know when metrics were last updated
- Priority: Critical -- random numbers actively undermine platform trust
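The deterministic demo-mode option is a small change: seed the generator from a stable identifier so the same plan always shows the same numbers. A sketch (the seed prefix and field names are assumptions):

```php
<?php
// Sketch: deterministic demo metrics seeded from the plan ID, so demo
// dashboards stay stable across page loads instead of rand() noise.
// The seed prefix and field names are illustrative.

function demoEngagementMetrics(int $planId): array
{
    mt_srand(crc32('demo-metrics-' . $planId));   // same plan -> same numbers
    return [
        'impressions' => mt_rand(5000, 50000),
        'engagements' => mt_rand(500, 5000),
        'is_demo'     => true,                    // surface this flag in the UI
    ];
}

$a = demoEngagementMetrics(17);
$b = demoEngagementMetrics(17);
// $a === $b -- stable across calls, unlike rand()
```

Pair the `is_demo` flag with a "Connect [service] to see real data" CTA so partners always know whether they are looking at sample or live metrics.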
### 7. Approval-Only Publishing (Never Auto-Publish Without Review)
Write-up pattern: Everything is a draft. The operator always reviews, usually asking for more charts and verifying formatting rules before approving. Nothing goes live without explicit human approval.
Vellocity current state: The ApprovalWorkflowService has autoPublishReadyAssets() (lines 323-348), which publishes assets that are both-approved AND past their publish date. The approve() method (lines 45-65) automatically sets publish_date to tomorrow when both partners approve:
```php
if ($asset->isBothApproved() && !$asset->publish_date) {
    $asset->update([
        'publish_date' => now()->addDays(1), // Default: publish tomorrow
    ]);
}
```
Gap: The auto-scheduling behavior after dual approval may surprise partners. The write-up emphasizes that even after both partners approve content quality, there should be an explicit "Publish Now" or "Schedule for [date]" action -- not an automatic "publish tomorrow" default.
Recommendation:
- Remove the automatic publish_date assignment from the approve() method
- After both partners approve, show a "Ready to Publish" state with explicit scheduling options
- Add a "Publish Confirmation" step requiring one partner to explicitly schedule or publish
- Keep autoPublishReadyAssets() for assets that have been explicitly scheduled
- Priority: Medium -- important for partner trust but the current behavior isn't catastrophic since it requires dual approval first
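A minimal sketch of the explicit scheduling step, assuming hypothetical class and method names (this is not the real ApprovalWorkflowService API): the publish date is set only when a partner takes an explicit action, and scheduling without dual approval is rejected.

```php
<?php
// Sketch: explicit scheduling replaces the "publish tomorrow" default.
// Class, method, and property names are hypothetical.

class PublishScheduler
{
    public function schedule(object $asset, \DateTimeImmutable $when): void
    {
        if (!$asset->bothApproved) {
            throw new \RuntimeException('Both partners must approve before scheduling.');
        }
        $asset->publishDate = $when;   // set only on explicit partner action
    }
}

$asset = new class {
    public bool $bothApproved = true;
    public ?\DateTimeImmutable $publishDate = null;
};
(new PublishScheduler())->schedule($asset, new \DateTimeImmutable('2025-07-01 09:00'));
// publishDate is set only because a partner explicitly scheduled it
```

autoPublishReadyAssets() then remains safe to keep: it only ever acts on dates a partner set deliberately.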
### 8. Notification When Content Needs Review
Write-up pattern: The operator gets notified immediately when a new lead comes in (via n8n + NTFY), and Claude is ready with context when they open the conversation.
Vellocity current state: Notifications exist (ExecutionStuckNotification, ExecutionCompleteNotification) and partnership emails are sent (PartnershipInvitationEmail, follow-ups). However, there are no notifications for content review events:
- Partner A submits content for approval -> Partner B gets no notification
- Partner B requests changes -> Partner A gets no notification
- Both partners approve -> No notification that content is ready to schedule
- Content is published -> No notification to either partner
Gap: The approval workflow has no event-driven notifications. Partners have to manually check the dashboard to discover pending reviews.
Recommendation:
- Add ContentReviewRequestedNotification -- sent to the other partner when content is submitted for review
- Add ContentChangesRequestedNotification -- sent when a partner requests changes
- Add ContentApprovedNotification -- sent when both partners have approved
- Add ContentPublishedNotification -- sent when content goes live
- Support both in-app and email notification channels
- Priority: High -- without this, the approval workflow has too much friction (partners forget to check)
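In Laravel these would be Notification classes dispatched from the ApprovalWorkflowService; stripped of the framework, the core is a small event-to-notification mapping. Event names and notification class names below are the suggestions above, not existing code:

```php
<?php
// Plain-PHP sketch of the event-to-notification routing. In Laravel the
// right-hand values would be Notification classes dispatched from the
// ApprovalWorkflowService. Event names here are suggestions.

const REVIEW_NOTIFICATIONS = [
    'content.submitted'         => 'ContentReviewRequestedNotification', // -> other partner
    'content.changes_requested' => 'ContentChangesRequestedNotification', // -> submitting partner
    'content.both_approved'     => 'ContentApprovedNotification',         // -> both partners
    'content.published'         => 'ContentPublishedNotification',        // -> both partners
];

function notificationFor(string $event): ?string
{
    return REVIEW_NOTIFICATIONS[$event] ?? null;
}

echo notificationFor('content.submitted');  // ContentReviewRequestedNotification
```

Each notification class would implement both the `mail` and `database` channels so partners get email plus an in-app inbox entry.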
### 9. AI Discoverability Optimization
Write-up pattern: The operator optimizes for AI crawls (ChatGPT, Perplexity, GPTBot) by using questions as H2 headings. This resulted in 155 AI crawls, 50+ clicks from ChatGPT users, and 20 from Perplexity -- a new traffic channel that most SEO tools ignore.
Vellocity current state: The SEO tools focus on traditional search optimization (keywords, backlinks, listing compliance). The MarketplaceAwarenessCapability monitors competitor positioning. There is no concept of AI search optimization.
Gap: For AWS ISV partners, being recommended by ChatGPT or Perplexity when enterprise buyers ask "What's the best [category] solution on AWS Marketplace?" would be extremely valuable. This is an emerging channel that Vellocity could differentiate on.
Recommendation:

- Add an "AI Discoverability Score" alongside the existing SEO score
- Content generation should include AI-friendly formatting: questions as H2s, structured data, clear answer paragraphs
- Track AI crawler visits to published co-sell content (detect GPTBot, PerplexityBot, ClaudeBot in analytics)
- Add a content style rule for "AI search optimization" to the style guide system (improvement #2)
- Priority: Low-Medium -- differentiating but not critical for current partner workflows
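Detecting AI crawler visits is a User-Agent check. The bot tokens below are the publicly documented crawler names for OpenAI, Perplexity, and Anthropic; treat the list as a starting point, not exhaustive:

```php
<?php
// Sketch: identify AI crawler visits from the User-Agent string.
// Tokens are the publicly documented crawler names; extend as needed.

function detectAiCrawler(string $userAgent): ?string
{
    foreach (['GPTBot', 'OAI-SearchBot', 'PerplexityBot', 'ClaudeBot'] as $bot) {
        if (stripos($userAgent, $bot) !== false) {
            return $bot;
        }
    }
    return null;
}

echo detectAiCrawler('Mozilla/5.0; compatible; GPTBot/1.2; +https://openai.com/gptbot');
// GPTBot
```

Logging these hits per published SharedAsset gives partners an "AI crawls" metric alongside traditional search impressions.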
### 10. Data Visualization in Content (Charts Not Stock Images)
Write-up pattern: Posts include 8-12 visuals rendered as animated React charts from JSON data. No generic AI images. Every visual communicates actual data.
Vellocity current state: The MediaManagerModal (1492 lines) handles media uploads and AI image generation. Content generation produces text-based content stored in JSON. There is no chart/visualization generation capability.
Gap: Co-sell content (blog posts, case studies, sales decks) would be significantly more compelling with data visualizations. AWS partners have metrics (cost savings, performance improvements, migration timelines) that render well as charts.
Recommendation:
- Add a chart generation capability that takes structured data and produces embeddable chart JSON (compatible with Chart.js, Recharts, or similar)
- Include chart generation as part of the content generation workflow for data-heavy assets (case studies, blog posts with metrics)
- Store chart configurations in the SharedAsset content JSON alongside text
- Priority: Low -- nice to have but not core to the partner review experience
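The stored chart configuration could be plain Chart.js config JSON. A sketch: the `type`/`data`/`labels`/`datasets` fields follow the Chart.js config format, while the helper itself and the metric values are illustrative:

```php
<?php
// Sketch: turn structured metrics into a Chart.js-compatible config that
// can live in the SharedAsset content JSON alongside the text. The helper
// and sample data are illustrative; the field names follow Chart.js.

function barChartConfig(string $label, array $dataPoints): array
{
    return [
        'type' => 'bar',
        'data' => [
            'labels'   => array_keys($dataPoints),
            'datasets' => [[
                'label' => $label,
                'data'  => array_values($dataPoints),
            ]],
        ],
    ];
}

$config = barChartConfig('Monthly cost savings ($k)', ['Jan' => 12, 'Feb' => 18, 'Mar' => 27]);
echo json_encode($config);   // embeddable wherever the asset is rendered
```

Because the config is data, it flows through the same diff/approval pipeline as text: a changed chart shows up in the review view like any other content edit.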
## Summary: Priority Matrix
| # | Improvement | Priority | Impact | Effort |
|---|---|---|---|---|
| 6 | Replace rand() analytics with real/deterministic data | Critical | Trust | Medium |
| 1 | Side-by-side diff review for content changes | High | Review UX | Medium |
| 3 | SEO feedback loop (performance -> optimization) | High | Content ROI | High |
| 8 | Notifications for content review events | High | Review flow | Low |
| 2 | Content style guide enforcement | Medium-High | Quality | Medium |
| 4 | Individual lead journey attribution | Medium | Sales insight | High |
| 5 | Unified conversational context across capabilities | Medium | UX | High |
| 7 | Explicit publish confirmation (remove auto-schedule) | Medium | Trust | Low |
| 9 | AI discoverability optimization | Low-Medium | Differentiation | Medium |
| 10 | Data visualization in content | Low | Content quality | Medium |
## Recommended Implementation Order

1. Replace simulated analytics (#6) -- quick trust fix
2. Add review notifications (#8) -- low effort, high impact on approval flow
3. Remove auto-schedule on approval (#7) -- small code change, important for trust
4. Build content diff view (#1) -- core review UX improvement
5. Add style guide system (#2) -- quality improvement that compounds
6. Build SEO feedback loop (#3) -- highest ROI pattern from the write-up
7. Remaining items by priority as resources allow