Why Your Sales Team Should Run Like a Research Lab: The 100-Prospect Experiment

Zayd
There's a version of outbound where every week looks like the last one. More messages out, some meetings booked, a dashboard that moves sideways. You get better at running the motion without ever getting smarter about your market.
Most 0–$1M ARR teams are living in that version right now.
The problem is the structure. You can't learn anything from a 10,000-send blast because you haven't isolated anything. You've done a lot of work and hoped the results tell you the story you want to hear.
There's a better way to run outbound, one where the real output is knowledge that compounds, not just meetings. The unit that makes it work: batches of 100 prospects.
My favorite finds of the week:
Expert framework for scaling SaaS products to 7-figures (link)
Free sales training from a $30k/year mastermind (link)
Secrets to structure and present SaaS pricing (link)
Don’t sell through users to get buyers (link)
Prompting in JSON or XML format increases LLM output by 10x (link)
The traditional outbound playbook is dead (link)
Start With a Belief, Not a List
Most outbound motions start with a list. "We pulled 5,000 founders who raised in the last 90 days." Great. Now what?
A research lab starts with a belief, a sentence that begins with "I believe" and ends with something falsifiable.
A good belief sounds like this:
I believe that seed-stage founders who raised in the last 60 days will respond to a message about hiring their first AE, because that's the pain they're feeling right now.
You can prove that wrong. You can prove it right. You can run it and find out. That's the whole point.
A bad belief sounds like this:
I believe we should message more founders.
You cannot prove that wrong because you haven't said anything. You've announced that you want to do more work.
Pull apart any great outbound batch and you'll find a specific, falsifiable belief underneath it. Pull apart a bad one and you'll find activity wearing a costume.
Build the Cohort to Match the Belief
Once you have a real belief, the list builds itself. You're finding the 100 people who most precisely match what you're testing, not just the biggest number you can pull.
If the belief is about founders who raised in the last 60 days, you pull exactly that. Crunchbase, recent funding filter, narrow ICP filter, 100 names.
Done.
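That pull can be sketched as a simple filter. This is a minimal illustration, not a real Crunchbase export: the field names (`raised_on`, `icp_fit`) and the sample records are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical prospect records; the schema is illustrative, not a real
# Crunchbase export format.
prospects = [
    {"name": "A", "raised_on": date.today() - timedelta(days=30), "icp_fit": True},
    {"name": "B", "raised_on": date.today() - timedelta(days=120), "icp_fit": True},
    {"name": "C", "raised_on": date.today() - timedelta(days=10), "icp_fit": False},
    {"name": "D", "raised_on": date.today() - timedelta(days=45), "icp_fit": True},
]

def build_cohort(prospects, max_days_since_raise=60, size=100):
    """Keep only prospects who match the belief exactly, then cap the batch."""
    cutoff = date.today() - timedelta(days=max_days_since_raise)
    matches = [p for p in prospects if p["icp_fit"] and p["raised_on"] >= cutoff]
    return matches[:size]

cohort = build_cohort(prospects)
# Only A and D survive: raised within 60 days AND inside the ICP.
```

The point of the hard `size` cap is the discipline described above: when the filter returns 400 matches, you still take 100, rather than loosening the filter to "get more data."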
The urge you need to fight: expanding the cohort "to make sure we get enough data."
This is the moment where the lab becomes the factory again. You water down the hypothesis, water down the cohort, and at the end of the batch you have no idea what you learned because you were testing three things at once on four different kinds of prospect.
A precise cohort of 100 will tell you something real. A fuzzy cohort of 1,000 will tell you that you did a lot of work.
Test One Variable. Keep Everything Else Still.
This is where the discipline hurts.
Inside your batch of 100, pick one variable. For example: do people respond more to a message that references their recent funding, or a message that references a specific operational pain point?
Everything else stays identical: the CTA, the follow-up cadence, the timing, the channel.
Most teams run five experiments inside one batch. They change the opener, the proof point, the send time, the follow-up, and the channel, then look at the results and say "something worked." Something did work. They will never know what.
Isolate one variable. Move it. Measure it. Next batch, move a different one. Over ten batches, you've mapped ten distinct things about how your ICP behaves. That map is worth more than every individual meeting you booked along the way.
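A one-variable split can be sketched like this. The constants and the two opener labels ("funding" vs. "pain_point") are illustrative assumptions, not Valley features; the only thing that matters is that exactly one field differs between the two halves.

```python
import random

def split_batch(cohort, seed=42):
    """Randomly split one batch into two equal variants.

    Only the opener varies; every other field (CTA, cadence, channel)
    is stamped identically onto both halves, so any difference in
    results points at one cause.
    """
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = cohort[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    constants = {"cta": "15-min call", "cadence": "day 0/3/7", "channel": "linkedin"}
    variant_a = [{**p, **constants, "opener": "funding"} for p in shuffled[:half]]
    variant_b = [{**p, **constants, "opener": "pain_point"} for p in shuffled[half:]]
    return variant_a, variant_b

batch = [{"id": i} for i in range(100)]
a, b = split_batch(batch)
```

Next batch, you keep the winning opener as a new constant and move a different field into the `opener` slot's role.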
(Insert image: clean table showing a 2-variable split test inside a 100-prospect batch: variable A vs. variable B, with all other fields marked "constant")
💡 LinkedIn Tip of the Week
Asking for nothing in your connection request increases acceptance rates. The ask belongs in the follow-up message, not the invite.
Measure What Predicts Revenue, Not What Flatters It
Reply rate is the metric every team puts in their Friday update. It's also close to useless on its own.
If 10% of people reply and 9 out of 10 of those replies are "unsubscribe" or "please stop emailing me," you have a 1% positive response rate and an active brand problem.
Track the metrics that actually matter:
| Metric | Why It Matters |
|---|---|
| Positive response rate | Interested replies only; removes noise from the numerator |
| Meeting booked rate | How many positive replies converted to a calendar slot |
| Show rate | How many booked meetings actually showed up |
| Opportunity created | How many turned into real revenue potential |
A 4% reply rate that produces three closed-won deals is better than a 15% reply rate that produces none. The numbers at the top of the funnel are interesting. The numbers at the bottom are what your investors actually ask about.
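The funnel math is simple enough to sketch. The function and its field names are hypothetical; the example numbers reproduce the scenario above, where a flattering 10% raw reply rate hides a 1% positive response rate.

```python
def batch_metrics(sent, positive_replies, meetings_booked, shows, opportunities):
    """Funnel metrics for one batch. Rates are fractions (0.01 = 1%)."""
    return {
        # Only interested replies go in the numerator; "unsubscribe" doesn't count.
        "positive_response_rate": positive_replies / sent,
        "meeting_booked_rate": meetings_booked / positive_replies if positive_replies else 0.0,
        "show_rate": shows / meetings_booked if meetings_booked else 0.0,
        "opportunities": opportunities,
    }

# 100 sent, 10 raw replies, but 9 of them were "please stop emailing me":
m = batch_metrics(sent=100, positive_replies=1, meetings_booked=1, shows=1, opportunities=0)
# m["positive_response_rate"] is 0.01, despite the 10% raw reply rate.
```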
Write It Down Every Single Batch
Here's the part almost nobody does, and it's the exact reason most sales teams never compound their learnings: they don't record what they just did.
Every batch gets a one-paragraph writeup:
What was the belief?
What was the cohort?
What was the variable?
What happened?
What's the next experiment?
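The five questions above map directly onto a log you can keep in a spreadsheet. A minimal sketch using Python's standard `csv` module, with an illustrative sample entry:

```python
import csv
import io

# One column per question in the batch writeup.
FIELDS = ["batch", "belief", "cohort", "variable", "result", "next_experiment"]

def log_batch(writer, **entry):
    """Append one batch writeup as a CSV row; missing fields stay blank."""
    writer.writerow({f: entry.get(f, "") for f in FIELDS})

buf = io.StringIO()  # stand-in for a real file opened in append mode
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_batch(
    writer,
    batch=1,
    belief="Seed founders who raised <60d ago respond to first-AE hiring pain",
    cohort="100 seed-stage founders, raised in last 60 days",
    variable="funding opener vs. pain-point opener",
    result="pain-point opener: 6 positive replies vs. 2",
    next_experiment="hold opener constant, vary CTA",
)
```

A quarter of rows in this file is the document the next section describes: the playbook you hand a new AE on day one.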
Run this for a quarter and you have a document nobody else in your market has. Every batch, every variable tested, every answer earned, all in one place.
When you hire your next AE, you hand them that doc on day one. They ramp in half the time because they're inheriting a real playbook instead of half-remembered vibes from whoever ran outbound six months ago.
At Valley, this is how we run our own outbound. Tight, hypothesis-driven batches. Everything measured. Everything written down. Our best customers run the same playbook. And yes, we built a product that makes it significantly easier to execute, but you can run the first version of this in a spreadsheet starting tomorrow.
🎁 Gift Resource
The B2B Growth and Sales Creator Handbook: claim it here.
What You Actually Get Out of This
After one quarter of running outbound like a research lab, you know things about your market that your competitors literally cannot match:
Which segments respond to which framing
Which pain points are real versus vanity
Which channels your buyer actually uses — not the one everyone assumes they use
What your earned baseline response rate is, independent of any benchmark a sales newsletter quoted
That knowledge is a moat. Competitors can copy your emails in an afternoon. They cannot copy the three months of structured experiments you ran to figure out which emails to write.
Most 0–$1M ARR teams spend their first year drowning in activity and starving for insight. Ten batches of 100 this quarter instead of one big blast of 10,000 is the trade that flips that.
You'll send far fewer messages. You'll sleep better. And you'll end the quarter knowing more about your market than your investors do.
Almost nobody makes the trade. The ones who do win their category.
How Valley Supports Hypothesis-Driven Outbound
Valley is built around the same principle: precise, signal-qualified outreach over high-volume spray-and-pray.
Instead of pulling cold lists, Valley identifies warm prospects, people who have already engaged with your LinkedIn profile, content, or website, and qualifies them against your ICP before a single message goes out. That means every batch starts with higher signal, cleaner cohorts, and more reliable data.
When you're running structured outbound experiments, the quality of your starting cohort determines the quality of your learning. Valley makes that starting point tighter, faster, and safer for your LinkedIn account.
Frequently Asked Questions
What is hypothesis-driven outbound?
Hypothesis-driven outbound means starting every campaign with a specific, falsifiable belief about your ICP rather than a list and a hope. You define what you're testing, build a precise cohort to match the belief, isolate one variable, and measure what actually happened. The output is market knowledge that compounds over time, not just a batch of meetings.
Why use batches of 100 prospects instead of larger sends?
100 prospects is the size where you can think clearly about what you're testing. It's large enough to produce a meaningful signal, small enough to keep the cohort precise. Larger sends create noise: you end up testing multiple variables across multiple segments simultaneously and learn nothing actionable.
What metrics should outbound teams actually track?
Track positive response rate (interested replies only), meeting booked rate, show rate, and opportunity created. Raw reply rate is noisy and easy to game. The bottom-of-funnel metrics (meetings that showed up, opportunities with money attached) are what compound into real business outcomes.
How do you prevent outbound experiments from blurring together?
Write a one-paragraph summary after every batch: the belief, the cohort, the variable tested, the result, and the next experiment. This creates a searchable, transferable playbook that survives rep turnover and accelerates new-hire ramp.
How does Valley help with structured outbound experiments?
Valley identifies warm, signal-qualified prospects (people already engaging with your profile or content) and filters them against your ICP before outreach begins. This gives your experiments a cleaner starting cohort and more reliable signal than pulling cold lists where intent is unknown.
See more of Valley's outreach messaging examples: coolmessagebro.com
Generate more demos using LinkedIn: Book here
Become a Valley partner and earn 20% recurring commission: Join the affiliate program
Which channels does Valley support?
Valley supports LinkedIn outreach, including connection requests and InMails. Valley users safely send 1,000–1,200 messages per seat every month.