You put a show on your shortlist. The topic matched your product. The download numbers felt respectable. The host sounded credible from everything you found online. So you moved to negotiation. A test budget. Three episodes. Then the data came back and nothing moved.
Here is what went wrong. The vetting stopped at the surface. Media kits are sales documents. Download counts are not listener counts. And a topic match is not audience fit. This guide gives you the full verification process, every check, in the exact order it needs to happen, before a single dollar goes anywhere.
What This Guide Covers:
1. Why download counts set your vetting up to fail
2. Seven items to request from any show before you dig deeper
3. How to verify that a show's audience is actually real
4. Platform and feed health checks that reveal how a show is run
5. Engagement signals that confirm listeners are paying attention
6. What one full episode tells you that no media kit ever will
7. How to read a show's sponsor history like a media buyer
8. Host credibility checks to run before you sign anything
9. Red flags that look harmless until your campaign ends
10. How to score every shortlisted show before your final decision
11. How to run a pilot before committing full budget
1. Download Counts Set Your Vetting Up to Fail
A download count records how many times an episode file was requested. That is all it does. It does not tell you whether a real person pressed play. It does not tell you whether they stayed for thirty seconds or the full hour. It says nothing about whether any of those listeners has any relationship with what you sell.
The Interactive Advertising Bureau (IAB) sets technical standards that filter out bots, duplicate requests, and automated server pings from raw download numbers. But even IAB-certified figures represent compliant downloads, not verified human listeners. There is a gap between those two things. No media kit will flag it.
➤ What the research says about size and performance
A 2025 analysis by Magellan AI found that shows in the 5,000 to 30,000 download-per-episode range consistently outperformed larger shows on a per-acquisition basis for direct response campaigns. A tighter audience with smaller numbers almost always outperforms a bigger audience with a mixed one.
The size of the room does not tell you who is sitting in it. The checks that follow tell you that.
2. Request These 7 Items Before You Vet a Show
Before you listen to a single episode or open a sponsor history tool, ask the show for these seven things in writing. Everything that follows in this vetting process depends on having actual data to evaluate, not media kit claims.
A show that responds quickly and completely is a show that takes sponsorships seriously. A show that pushes back or claims it does not usually share this information is giving you real information about how it handles advertiser relationships before you have spent a cent.
➤ Here is exactly what to ask for:
| Per-episode download averages from the last 90 days specifically. Not a lifetime total. Not an annual figure. Episode-level numbers from the most recent quarter. |
| Episode completion rate from their hosting analytics or Spotify for Podcasters dashboard. This tells you how long listeners actually stay, which determines whether your ad reaches an attentive audience or an empty room. |
| Geographic breakdown of their listener base. You need to know what percentage of the audience is in the country your offer actually serves. |
| IAB certification confirmation from their hosting platform. Ask which platform they use and whether it carries IAB Tech Lab certification. |
| One sponsor reference. A brand that ran a campaign on the show in the last twelve months and is willing to take a quick call or reply to an email. |
| Two or three recent ad reads. Either timestamped in specific episodes or played during a discovery call. You need to hear how this host handles sponsor content before you commit. |
| Tracking confirmation. Can they set up a unique promo code and a show-specific vanity URL before your campaign launches? If they cannot, attribution will be a problem from day one. |
| What to do: Build a shared spreadsheet where each column is one of these seven items and each row is one show on your list. Gaps in that spreadsheet are data points, not just missing fields. The shows that fill every column without prompting are the ones worth continuing. |
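If your team would rather generate that spreadsheet than build it by hand, here is a minimal sketch using Python's standard `csv` module. The file name, column labels, and show names are placeholders, not a standard format:

```python
import csv

# One column per requested item, per the list above.
COLUMNS = [
    "show",
    "90-day per-episode downloads",
    "completion rate",
    "geo breakdown",
    "IAB certification",
    "sponsor reference",
    "recent ad reads",
    "tracking confirmation",
]

def write_checklist(shows, path="vetting_checklist.csv"):
    """Create the shared checklist with one row per show.

    Cells start empty on purpose: a cell that stays blank after
    follow-up is itself a data point about that show.
    """
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        for show in shows:
            writer.writerow({"show": show})

write_checklist(["Example Show A", "Example Show B"])
```

Fill the cells as responses arrive, and note the date each item was requested so slow responders are visible too.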
3. How to Verify a Show’s Audience Is Actually Real
You have the data you requested. Now the question is whether it reflects reality. Inflated numbers exist in podcast advertising, and they come from more places than most advertisers realize.
Automated bots request episode files without any human input. Some shows have used download-stuffing services to appear larger than they are. Legitimate but entirely passive subscribers download episodes and never press play. None of these downloads represents a listener who can convert.
➤ Cross-reference the social footprint
A show claiming 80,000 monthly downloads should have some visible listener engagement, even loosely. Not a one-to-one ratio. Podcast listeners do not always follow on social, but a show with six-figure claimed downloads and no detectable interaction anywhere warrants a direct question. Ask the host where their listener base actually engages beyond the feed.
➤ Look for third-party verification
Podtrac is an independent measurement service that verifies podcast audience size. Shows using Podtrac have a third party vouching for their numbers rather than self-reporting. Chartable offers similar tracking with attribution data on top. If a show uses either, you are working with independently verified figures.
If the show cannot point you to any third-party data source, factor that into how much weight you give to everything else they have shared.
➤ Read the actual review text, not just the star rating
Open Apple Podcasts. Read the five most recent reviews. Do they describe a specific episode? Do they mention something the host said and explain why it landed? Do they sound like real people with genuine context for why they listen?
Generic five-star reviews with no specifics tell you almost nothing. Detailed, emotionally specific reviews tell you this audience has a real relationship with the content and that relationship is what carries your ad.
| What to do: If a show has strong numbers but cannot provide third-party verification, ask for the raw download data export directly from their hosting platform. Any reputable host can produce this in a few minutes. If they cannot or will not, that tells you something worth knowing before you proceed. |
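If the host does send a raw export, the trailing-90-day per-episode average is a quick calculation to run yourself rather than take on faith. A sketch, assuming rows with `date` (ISO format), `episode`, and `downloads` fields; real exports vary by platform, so load the file with `csv.DictReader` and rename the keys to match whatever the host's platform actually produces:

```python
from datetime import date, timedelta

def avg_per_episode_downloads(rows, today):
    """Trailing-90-day average downloads per episode.

    `rows` are dicts with hypothetical keys 'date', 'episode',
    and 'downloads' -- adjust to the host's real export schema.
    """
    cutoff = today - timedelta(days=90)
    per_episode = {}
    for row in rows:
        if date.fromisoformat(row["date"]) >= cutoff:
            ep = row["episode"]
            per_episode[ep] = per_episode.get(ep, 0) + int(row["downloads"])
    if not per_episode:
        return 0.0
    return sum(per_episode.values()) / len(per_episode)
```

Compare the result against the number the show quoted in its media kit; a gap of more than about 20% is worth a direct question.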
4. Platform and Feed Checks That Reveal the Truth
The technical health of a podcast tells you things about how seriously it is run that no media kit will mention. These checks take fifteen minutes per show and surface problems before they become your problems.
➤ Confirm the hosting platform is IAB certified
You asked for this in section 2. Now verify it independently. Buzzsprout, Libsyn, Megaphone, Captivate, and Podbean all carry IAB Tech Lab certification. A certified platform means downloads are counted using standardized rules that exclude bots, duplicate server requests, and automated pings. A show that cannot confirm certification warrants sharper scrutiny on everything else it shares.
➤ Run the RSS feed through a validator
Copy the show’s feed URL (usually findable in the hosting platform’s public settings) and paste it into Cast Feed Validator or the W3C Feed Validation Service. A clean feed with no errors means the show publishes consistently and professionally. Persistent errors can affect distribution across Apple Podcasts and Spotify, which means some of the audience you are paying to reach may not be receiving episodes reliably.
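If you want a quick local sanity check before reaching for a full validator, the standard library can at least confirm the feed is well-formed XML with a titled channel and at least one episode. This is a minimal sketch only; a real validator like Cast Feed Validator checks far more (iTunes tags, enclosure URLs, dates):

```python
import xml.etree.ElementTree as ET

def feed_problems(xml_text):
    """Return a list of basic structural problems in an RSS feed string.

    Empty list means the feed passed these minimal checks -- it does
    NOT mean the feed would pass a full validator.
    """
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed XML: {exc}"]
    channel = root.find("channel")
    if channel is None:
        return ["no <channel> element"]
    problems = []
    if channel.find("title") is None:
        problems.append("channel has no <title>")
    if not channel.findall("item"):
        problems.append("feed contains no episodes (<item> elements)")
    return problems
```

Fetch the feed URL with any HTTP client, pass the body to `feed_problems`, and treat a non-empty result as a prompt for the full validator, not a verdict.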
➤ Check distribution across directories
Is the show available on Apple Podcasts, Spotify, and at least two other directories? A show distributed across five or more platforms has a broader, more resilient audience. A show that only exists on one platform has a single point of failure if that platform’s algorithm or recommendation behaviour changes.
| What to do: If the feed returns errors, do not treat it as an automatic disqualifier. Ask the host which analytics platform they use and whether those errors have affected listener delivery. Errors alongside strong self-reported data are a combination worth asking about directly. |
5. Engagement Signals That Prove Listeners Stay
You have confirmed the numbers are likely real and the technical infrastructure is sound. Now the question shifts: is this audience actively engaged or passively present? These four signals answer that question.
➤ Episode completion rate above 70%
You requested this number in section 2. Here is how to interpret it. Above 70% means your mid-roll ad reaches a listener who has already committed thirty to forty-plus minutes of sustained, uninterrupted attention. That is meaningfully different from reaching someone who dropped off in the first eight minutes. If the show cannot produce this figure, ask which analytics platform they use. Any show on Spotify for Podcasters, Podtrac, or Chartable can pull retention data without much effort.
➤ Publishing consistency over the last twelve months
Pull the full episode history. Check whether the show published on schedule without gaps for the past year. Every missed week represents a gap in your campaign continuity. A show with an unbroken weekly cadence for two or more years signals a production operation that treats its audience as a real commitment and treats its advertiser relationships the same way.
➤ Subscriber growth that has held over six months
Ask for listener or subscriber figures across the past six months. A steady upward slope signals an audience compounding through organic word of mouth. A sharp spike followed by a plateau often signals a one-time external event, like a viral clip or a celebrity mention that brought in listeners who did not stay. One spike is not a growing audience. It is a single event that aged.
➤ Off-platform community activity
Search the show’s name on Reddit, in Facebook Groups, and on LinkedIn. Are there listener conversations that the host did not start? Are listeners recommending the show to each other in contexts that have nothing to do with the podcast itself?
The difference between a show with a loyal community and one without it is significant for direct response. Engaged communities carry and amplify sponsor mentions. Passive listener bases do not.
| What to do: Treat completion rate and publishing consistency as your two minimum thresholds before anything else on this list matters. If a show cannot provide completion data and has visible gaps in its publishing history, weight everything else it presents accordingly. |
6. What One Full Episode Tells You No Kit Will
Reading a media kit tells you what a show wants you to believe about itself. Listening to one full, recent episode tells you what the show actually is. These two things are often different.
Block sixty minutes. Pick a recent episode, something from the last four weeks, not a highlights compilation or a best-of replay. Listen from start to finish with specific attention on three things.
➤ How the host handles existing ad reads
Does the delivery shift noticeably when a sponsor break starts? A tone change listeners can hear disconnects your ad from the trust the host built in the first twenty minutes. The best host-read ads sound identical to the rest of the episode. If you can clearly feel the commercial moment begin, every listener can too and many will tune out.
➤ How the host explains complex topics
If your product requires any explanation at all, pay close attention to how this host handles nuance during the episode itself. A host who simplifies clearly will do the same for your product. A host who gets vague when topics get detailed will carry that same vagueness into your ad read.
➤ Whether a real audience is present in the episode
Does the host reference listener questions, community moments, or direct feedback during the episode? Shows where the host talks to a real, responsive audience feel different from shows where the host talks at an abstract one. That distinction carries directly into how listeners receive sponsor messages. The first kind of host is someone listeners trust and act on. The second is someone they follow while distracted.
| What to do: Listen specifically to how existing ad reads are placed, paced, and delivered. That is your clearest preview of what your ad will sound like in this show. If it sounds like an interruption rather than a continuation of the episode, factor that into your brief or move the show down your priority list. |
7. How to Read Sponsor History Like a Media Buyer
A show’s sponsor history is commercial intelligence that other brands paid to generate. You are accessing their conclusions without running the experiment yourself.
➤ Where to find it
Magellan AI tracks podcast sponsorship data across thousands of shows. Search a show’s name and you will see which brands have advertised, for how long, and whether they returned. You can also do this manually. Listen to episodes from six months ago and note every sponsor. Then check whether those same brands are still advertising on the show today.
➤ What a strong sponsor history signals
Two or three brands in your general category have advertised across multiple consecutive episodes. At least one ran a second campaign after a break. The relationships have duration, not just a single placement followed by silence. When brands return, it is because something moved. No one quietly renews out of goodwill.
➤ What a weak sponsor history signals
A rotating cast of brands, none staying longer than one episode, no repeats of any kind. This pattern appears consistently on shows that sell media slots to advertisers who do not measure carefully. Brands that find results do not leave quietly. If no one in any category returned for a second placement across two or more years of history, assume there is a conversion problem the show will not raise on its own.
| What to do: Ask the show directly which past sponsors in your category came back for a second campaign. The answer and how readily they give it tells you a great deal about what those earlier advertisers found. |
8. Host Credibility Checks to Run Before Signing
The host is what you are actually buying when you book a host-read ad. Their credibility with the audience is the mechanism through which your ad works. A show can pass every technical check and still underperform if the host has no genuine connection to your product category.
➤ Search the host’s name alongside your product category
Has this host written about, discussed, or engaged with topics related to what you sell without being paid to? A host who references productivity tools in their newsletter, recommends operational software on their own social feed, and regularly interviews people in your space has real context for your product. That context makes an ad read feel like a natural extension of who the host is. Without it, the read is just a commercial that the audience can feel the edges of.
➤ Run a quick controversy check
Search the host’s name alongside the word “criticism” or “controversy.” Five minutes. Some results will be completely irrelevant. Some will not be. Finding a brand-safety issue before you sign is significantly better than finding it after placements have aired.
➤ Check how they handle existing sponsors off-air
Look at the host’s social content around the time of a recent sponsored episode. Do they mention the sponsor’s product unprompted outside the show? Do they appear to actually use what they advertise? A host who organically references a product they have promoted is giving their audience continuous, low-pressure reinforcement of that message. A host who reads copy once and never references it again is giving you a different signal entirely.
| What to do: On the discovery call, ask the host directly: “Can you describe a specific type of listener who would genuinely benefit from what we offer?” A host who knows their audience gives you a specific person with specific context. A host who responds with demographic generalities is describing a media kit, not a community they know. |
9. Red Flags That Look Fine Until It Is Too Late
Some warning signs are obvious. Others read as harmless until the campaign ends and nothing converted. These are the ones worth recognizing before you sign. Any single one of the following is worth a direct follow-up question. Three together is a reason to pass and move to the next show on your list.
➤ The spike that did not hold.
You pull the subscriber growth chart and there is one month (roughly eight or nine months back) where downloads tripled. Then they dropped and have barely moved since. One external event can inflate numbers temporarily, and that inflated number shows up in an averaged monthly figure with no context. Always ask directly: what caused the highest single month in the last twelve?
➤ Lifetime downloads leading the conversation.
A show that opens with “9 million downloads since 2017” is telling you its history. What you need is its current audience. The per-episode average from the last 90 days is the only figure that matters for your campaign. Shows with strong current numbers share them willingly and specifically. Shows that redirect to lifetime figures when pressed for recent ones are almost always protecting a number that does not hold up.
➤ Every past sponsor was a one-episode buy.
One-off placements are common for newer shows. They are unusual for an established show with years of history. If every sponsor across the past two years bought a single episode and none returned, with no exceptions, assume there is a conversion problem the show will not volunteer.
➤ The host cannot describe their own listener.
If you ask on a call for a specific description of who listens and you receive demographic generalities in return, that is a signal. A host who genuinely knows their community can describe a real person with real context without hesitation.
➤ No recent reviews anywhere.
Reviews arrive continuously on shows with active, loyal audiences. A show where the most recent Apple Podcasts review is dated many months ago suggests the invested listener base has drifted. New casual listeners rarely leave reviews. Loyal ones do.
10. Score Every Shortlisted Show Before Deciding
Gut feeling is not a media plan. Rate each shortlisted show from 1 to 5 across these eight criteria, total the score, and use the thresholds below to guide your final decision. Every criterion maps back to a check you have already completed at this point in the process.
| Criteria | Score 1 | Score 5 |
|---|---|---|
| Audience fit | Vague category overlap | Near-exact match to your buyer |
| Completion rate | Below 50% or unavailable | Above 75%, confirmed in writing |
| IAB compliance | Platform not certified | Fully certified, confirmed |
| Audience authenticity | Self-reported only | Podtrac or Chartable verified |
| Publishing consistency | Multiple gaps in 12 months | Unbroken cadence for 12+ months |
| Sponsor retention | No repeats in any category | 2+ repeat sponsors in your category |
| Host credibility | No organic category connection | Deep, demonstrable alignment |
| Attribution readiness | No unique tracking available | Promo code and URL confirmed |
Score thresholds:
- 33–40: Confirmed buy. Move to negotiation.
- 22–32: Test candidate. Budget for two or three episodes with tight attribution before scaling.
- Below 22: Pass. Move to the next show on your list.
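The scorecard and its thresholds reduce to a few lines of code if you want the decision applied consistently across a long shortlist. A sketch, with criterion names shortened from the table above:

```python
# One key per row of the scorecard table.
CRITERIA = [
    "audience_fit", "completion_rate", "iab_compliance",
    "audience_authenticity", "publishing_consistency",
    "sponsor_retention", "host_credibility", "attribution_readiness",
]

def decide(scores):
    """Total a show's 1-5 scores and apply the guide's thresholds."""
    assert set(scores) == set(CRITERIA), "score every criterion"
    assert all(1 <= s <= 5 for s in scores.values()), "scores are 1-5"
    total = sum(scores.values())
    if total >= 33:
        return total, "confirmed buy"
    if total >= 22:
        return total, "test candidate"
    return total, "pass"
```

A show scoring 3 on everything lands at 24, squarely in test-candidate territory, which is the point: only consistently strong answers across all eight checks justify a confirmed buy.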
Pro Tip: Build this scorecard in a shared spreadsheet with one row per show and one column per criterion. Add a notes column for anything your research surfaced that the score alone does not capture: a red flag from the episode listen, a sponsor reference that was slow to respond, a feed error that came up. That transparency removes most of the disagreement from your final shortlist conversation.
11. Run a Pilot Before Committing Full Budget
Even after every check in this guide, the first campaign on any new show is still a test. Treat it as one.
Cap your initial run at two to three episodes. Define in writing what a successful result looks like before the first episode drops, not after you have seen the data and are deciding how to interpret it. Set your target cost per acquisition before any negotiation begins. If you have not set that number independently, you will find yourself rationalizing whatever number comes back.
➤ What a working pilot looks like at 60 days
A show that scores well on your vetting scorecard and delivers results within 20% of your target cost per acquisition after a three-episode run is worth scaling. Double the episode count. Negotiate a monthly package. Ask about category exclusivity for the next period.
A show that passes vetting but underdelivers in the first cycle deserves one honest review before you walk away. Was the brief clear enough? Did the host deliver the read the way the brief specified? Was attribution capturing everything it should? If one of those variables was off, adjust it and run one more cycle. If all three were clean and results still were not there, redirect the budget and move on.
➤ Keep the tracking active after the campaign ends
If you ran baked-in ads (ads recorded directly into the episode rather than dynamically inserted), those placements stay in that episode permanently. Every new listener who discovers the show through its back catalogue will hear your ad. That is meaningful long-tail value most advertisers never account for.
Keep your promo code and landing page active for at least twelve months after the campaign ends. Set a calendar reminder to check the attribution data at the six-month mark. You may find conversions still arriving from episodes that aired months earlier.
Platforms like MillionPodcasts provide filtering capabilities around audience demographics, engagement signals, and sponsorship history that help advertisers move through the early stages of vetting faster. But the confirmation work (listening to episodes, reading reviews, calling sponsor references, checking the feed) still happens away from any filter. That part requires time and judgment, not just a search.
| What to do: Write your renewal threshold before the campaign launches. If a show delivers at or below your target cost per acquisition, you renew. If it misses by more than 40% after a full 60-day window with clean attribution and a complete brief, you move on. Having this number in writing before any episode airs removes the emotion from the renewal conversation later. |
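The renewal rule in the box above is simple enough to write down as code, which is one way to keep the emotion out of the conversation. A sketch; the middle band between the target and a 40% miss corresponds to the one-honest-review step described earlier, and the 40% tolerance is this guide's rule, not a universal benchmark:

```python
def renewal_decision(cpa, target_cpa, miss_tolerance=0.40):
    """Apply the written renewal rule after a clean 60-day window.

    Renew at or below target; walk if the miss exceeds the
    tolerance; otherwise review the brief and attribution and
    consider one more cycle.
    """
    if cpa <= target_cpa:
        return "renew"
    if cpa > target_cpa * (1 + miss_tolerance):
        return "move on"
    return "review: check brief, delivery, and attribution, then run one more cycle"
```

Write the target and tolerance into the campaign doc before the first episode airs; the function just enforces what you already agreed to.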
The Vetting Process in Order
The checks in this guide are sequential by design. You cannot meaningfully interpret a completion rate before you have the data. You cannot assess a sponsor history before you understand what audience you are evaluating it for. And you cannot score any show before you have run every check that feeds into that score.
Skip the data request and your evaluation has no foundation. Skip the scorecard and your final decision has no logic. Skip the attribution setup and your pilot produces no story worth acting on. None of this is complicated. But the order matters as much as the individual checks.
Six out of every ten podcast listeners have purchased from a brand they heard advertised on a podcast. That conversion does not happen because a show had large download numbers. It happens because the right product reached the right listener through a host they already trust, and that only happens when the vetting that came before it was done properly.
What has your experience been with shows that looked right on paper but underdelivered? If there is a check you wish you had run before committing budget, it is worth sharing below.
Citations
- Magellan AI — Q1 2025 Quarterly Benchmark Report — Direct response performance by show size; mid-tier show outperformance data — magellan.ai — https://www.magellan.ai/news-insights/podcast-advertising-benchmarks-q1-2025
- IAB Tech Lab — Podcast Measurement Technical Guidelines Version 2.1 — Standards for IAB-compliant download counting and bot filtering methodology — iabtechlab.com — https://iabtechlab.com/standards/podcast-measurement-guidelines/
- Podtrac — Industry Rankings and Audience Measurement Methodology — Third-party verification standards for podcast audience size — podtrac.com — https://podtrac.com/industry-rankings/
- ADOPTER Media — Podcast Advertising Guide: How High-Performance Campaigns Are Built — Baked-in ad long-tail value; show-specific vs. programmatic performance — adopter.media, January 2026 — https://adopter.media/podcast-advertising-guide/
- AD Results Media — 2026 Podcast Advertising Guide: Effectiveness, Statistics and More — Attribution window benchmarks; host-read ad conversion behavior — adresultsmedia.com, January 2026 — https://www.adresultsmedia.com/news-insights/is-podcast-advertising-effective/