An AI content evaluation tool that tells coaches and business owners exactly why their content isn't converting — and what to fix.
SalesScore was built to solve a common but frustrating problem for coaches and service-based businesses: creating content consistently is hard, but creating content that actually converts into clients is even harder.
Most creators rely on guesswork, trends, or inconsistent feedback. SalesScore introduces structure, using a defined evaluation rubric (the Buying-Client Checklist) to analyse content and highlight exactly what's missing from a conversion perspective. Commissioned by a business owner for their paid membership, it uses the OpenAI Assistants API to score, critique, and improve content submissions.
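As a rough illustration of that flow (not SalesScore's production code), a single evaluation through the OpenAI Assistants API might look something like the sketch below; the assistant name, model, and rubric wording are placeholders:

```python
# Hedged sketch only: shows one way a rubric-driven evaluation could run through
# the OpenAI Assistants API. Names, model, and rubric text are placeholders,
# not SalesScore's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An assistant whose instructions carry the evaluation rubric.
assistant = client.beta.assistants.create(
    name="Content Evaluator (example)",
    model="gpt-4o",
    instructions=(
        "You evaluate marketing content against a fixed rubric. "
        "Score each criterion 0-10, explain each score, and list the top fixes. "
        "Always return the same sections in the same order."
    ),
)

def evaluate(content: str) -> str:
    """Run one piece of content through the assistant and return its reply."""
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(thread_id=thread.id, role="user", content=content)
    run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)
    if run.status != "completed":
        raise RuntimeError(f"Run ended with status: {run.status}")
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value  # newest message first
```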
SalesScore was originally built as a no-code tool and later rebuilt into a custom app, eliminating platform dependency and giving full product control. The core challenge, getting the AI to apply the same rubric reliably on every single evaluation, was solved through deliberate prompt engineering, and the evaluations have stayed consistent since.
"Most feedback on content is vague. 'Make it better' isn't a system."
The core challenge was not generating content — it was evaluating it reliably. Feedback needed to be structured, not subjective. Results needed to be consistent across different inputs. The system had to reflect real buyer psychology, not generic writing advice. Outputs had to be actionable, not just descriptive.
Early versions struggled with variability in AI responses, making it difficult to trust the evaluation. Getting the AI to apply the same rubric with the same structure, depth, and specificity on every single run required moving beyond basic prompting — and building a real system.
A rubric-based evaluation system that scores content across key dimensions tied to conversion, built around the Buying-Client Checklist framework. It moves feedback from opinion-based to system-driven, pinpointing the specific conversion elements a piece of content lacks.
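Purely as an illustration of the idea, a scored result from a system like this could be modelled along these lines; the criterion fields below are invented placeholders, not the actual Buying-Client Checklist dimensions:

```python
# Hypothetical shape of one evaluation result; names are placeholders,
# not the real Buying-Client Checklist criteria.
from dataclasses import dataclass, field

@dataclass
class CriterionScore:
    name: str          # e.g. "clear offer", "specific audience", "call to action"
    score: int         # 0-10 on this criterion
    evidence: str      # what in the content earned or lost points
    fix: str           # the concrete change that would raise the score

@dataclass
class Evaluation:
    overall_score: int                                        # aggregate across criteria
    criteria: list[CriterionScore] = field(default_factory=list)
    priority_fixes: list[str] = field(default_factory=list)   # ordered, most impactful first
```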
The reliability problem was solved through deliberate prompt engineering — defining evaluation criteria, output format, scoring logic, and edge case handling so the system produces stable, trustworthy results on every single run.
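A minimal sketch of that kind of prompt discipline, assuming a fixed JSON output contract plus a post-hoc validation step; the criteria names, wording, and thresholds are illustrative, not the real rubric:

```python
# Illustrative only: a fixed output contract and a validator, one way to keep
# rubric evaluations structurally identical from run to run.
import json

CRITERIA = ["clear_offer", "specific_audience", "call_to_action"]  # placeholder names

EVALUATOR_INSTRUCTIONS = f"""
You are a content evaluator. Apply the rubric below to every submission.

Scoring logic:
- Score each criterion from 0 to 10. Never omit a criterion.
- Quote the exact sentence that justifies each score; if nothing qualifies, quote "NONE".

Output format:
- Respond with JSON only.
- Top-level keys: {', '.join(CRITERIA)}, priority_fixes.
- Each criterion key maps to an object with "score" (integer 0-10), "evidence", and "fix".

Edge cases:
- If the submission is empty or is not marketing content, score every criterion 0
  and explain why in priority_fixes.
"""

def validate(raw: str) -> dict:
    """Reject any response that drifts from the agreed structure."""
    data = json.loads(raw)
    for key in CRITERIA:
        entry = data.get(key)
        score = entry.get("score") if isinstance(entry, dict) else None
        if not isinstance(score, int) or not 0 <= score <= 10:
            raise ValueError(f"Malformed or missing criterion: {key}")
    if not isinstance(data.get("priority_fixes"), list):
        raise ValueError("priority_fixes must be a list")
    return data
```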
Instead of generic suggestions, the system highlights exactly what's missing and how to improve it. Each evaluation returns prioritised, specific feedback tied directly to rubric criteria — making it usable, not just informative.
Originally built on Bolt for speed. Rebuilt as a standalone application to remove ongoing platform fees, gain full code ownership, and ensure the tool could evolve beyond a third-party platform's constraints.
By systemising how content is judged, SalesScore lets users focus on improving their content rather than second-guessing it.
Have a workflow that needs automating or a product that needs building? Let's talk.
Start a Project