Why every AI tool is built backwards
You know good work when you see it. Every existing tool asks you to define it before you've seen anything. That's exactly backwards.
Every AI evaluation platform on the market today is built around the same assumption: that you can articulate what "good" looks like before you've seen what "good" looks like.
You can't. And that's the problem.
The standard no one can write down
Think about the last time you read a piece of writing and immediately knew it was excellent. You didn't run it through a checklist. You didn't compare it to a rubric. You just knew.
Now try to explain exactly why it was excellent. What were the specific qualities? How would you weight them against each other? How would you write scoring instructions for each one?
You'd struggle. Not because you don't understand quality; you clearly do. It's because human judgment works by recognition, not by definition. We recognize good work long before we can explain why it's good.
Every existing AI tool ignores this entirely. It asks you to write the rubric before you've shown it a single example. That's like asking a chef to write a recipe for a dish they've never tasted.
What happens when you start from the answer
There's a simpler way. Show us what good looks like. React to what we show you. We'll figure out the pattern.
This is how taste actually develops: through exposure and reaction, not through abstract specification. Your implicit knowledge of quality is far richer than anything you could write down. We just need a way to draw it out.
That's what Meritus does. And it changes everything about how you work with AI.
Meritus is currently in private beta. Join the waitlist to get early access.