How to Avoid Fake Downloads When Sponsoring Podcasts

You approved the budget. The show had 80,000 monthly downloads. The CPM looked fair for that reach. Two months later, the data came back flat. No traffic spike, no sales lift, no clean signal that any of it worked.

Here’s the part worth knowing before you call it a channel problem: the medium wasn’t the issue. The numbers were. Specifically, those 80,000 downloads may not have represented 80,000 real people who heard your ad. Some of those requests came from automated scripts. Others auto-synced to phones that never pressed play. And in documented cases, some were purchased directly from mobile advertising networks built specifically to inflate show metrics.

This guide walks through exactly how podcast download fraud works, how to recognize it before you commit budget, and how to build a buying process that produces real results regardless of what a media kit says.

What This Guide Covers:

1. What a "download" actually counts as, and what it doesn't
2. The four ways podcast numbers get inflated before you see them
3. Why IAB certification isn't the protection most advertisers think it is
4. The red flags hiding inside every media kit you'll receive
5. How to verify a show's audience is actually real before you buy
6. A pre-buy checklist to run on every show before negotiating
7. How to set up attribution that works even when download numbers lie
8. What to do when you suspect fraud mid-campaign
9. How to make fraud prevention a permanent part of your buying process

1. What Actually Counts as a Real Download

A download is a file request. That is the complete technical definition. When a device pulls an episode file from a hosting server, that request registers as a download. It doesn’t require anyone to press play. It doesn’t confirm the file was listened to. It records that a device asked for the file and that’s it.

The Interactive Advertising Bureau sets measurement standards that certified platforms use to filter raw download counts. These rules exclude obvious bots, duplicate requests within a 24-hour window, and pre-loaded files that weren’t intentionally accessed. That baseline matters.
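The spirit of that filtering can be sketched in a few lines. The log format, bot list, and dedupe window below are simplified assumptions for illustration, not the actual IAB specification:

```python
from datetime import datetime, timedelta

# Toy log entries: (client_id, user_agent, timestamp). Real server logs
# carry far more fields; this only sketches the spirit of the filtering.
BOT_AGENTS = {"GoogleBot", "AhrefsBot"}  # placeholder bot list

def filter_downloads(requests, window=timedelta(hours=24)):
    """Drop known bots and duplicate requests from the same client
    inside a 24-hour window, in the spirit of IAB-style rules."""
    counted = []
    last_seen = {}  # client_id -> timestamp of last counted request
    for client_id, agent, ts in sorted(requests, key=lambda r: r[2]):
        if agent in BOT_AGENTS:
            continue  # obvious bot traffic is excluded
        prev = last_seen.get(client_id)
        if prev is not None and ts - prev < window:
            continue  # duplicate request inside the dedupe window
        last_seen[client_id] = ts
        counted.append((client_id, agent, ts))
    return counted

requests = [
    ("device-a", "AppleCoreMedia", datetime(2024, 5, 1, 8, 0)),
    ("device-a", "AppleCoreMedia", datetime(2024, 5, 1, 9, 30)),  # duplicate
    ("device-b", "GoogleBot",      datetime(2024, 5, 1, 8, 5)),   # bot
    ("device-c", "Spotify",        datetime(2024, 5, 1, 10, 0)),
]
print(len(filter_downloads(requests)))  # 2
```

Four requests arrive; only two survive, because the duplicate and the bot are excluded before anything is reported as a download.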

But here’s the gap that doesn’t appear in any media kit. IAB certification verifies that a platform is counting correctly using standardized rules. It does not verify that the files being counted were requested by actual human beings.

As fraud analyst Anthony Gourraud documented in technical research, a script running across rotating mobile IPs can generate thousands of IAB-compliant downloads because the file requests technically meet the counting criteria. Compliance and fraud-free are not the same thing. That distinction is where everything else in this guide starts.

What to do: Treat every download figure as a ceiling, not a count. It tells you the maximum number of people who could have encountered your ad. Every check that follows tells you how many likely did.

2. The Four Ways Podcast Numbers Get Inflated

Understanding how fraud happens is the fastest way to recognize it. There are four main methods used to inflate podcast download figures, and each leaves a different kind of trail.

➤ The mobile game download trick

A publisher pays a mobile advertising company to display ads inside popular games. When a player taps the ad to earn in-game rewards, a podcast episode begins downloading to their device automatically. Each download registers as unique listener reach in advertiser reports, because technically, a real device on a real IP requested the file.

Bloomberg reporting found that iHeartMedia spent approximately $10 million over four years acquiring roughly six million unique monthly listeners through this method. The New York Post used similar tactics. None of those listeners had any meaningful relationship with the show.

➤ Script-based download inflation

Scripts sending repeated file requests to a hosting server can mimic normal listener behaviour across multiple IP addresses, rotated through VPNs, mobile networks, or distributed devices. The per-episode volume looks plausible because it’s calibrated to look that way.

The fingerprints show up in the data. Partial downloads that consistently stop at the same byte threshold. A disproportionate share of requests from mobile network IPs. Geographic concentrations in regions with no connection to the show’s claimed audience.
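Those fingerprints lend themselves to simple automated checks. The record layout and the flag thresholds below are illustrative assumptions, not industry benchmarks:

```python
from collections import Counter

# Hypothetical per-request records: (bytes_served, network_type, country).
def fraud_fingerprints(records, total_bytes):
    """Flag the three fingerprints described above: identical partial-download
    stop points, mobile-network-heavy traffic, and geographic concentration."""
    partial_stops = Counter(b for b, _, _ in records if b < total_bytes)
    top_stop_count = partial_stops.most_common(1)[0][1] if partial_stops else 0
    mobile_share = sum(1 for _, net, _ in records if net == "mobile") / len(records)
    top_country_count = Counter(c for _, _, c in records).most_common(1)[0][1]
    return {
        # many partial downloads halting at the exact same byte offset
        "same_byte_stop": top_stop_count / len(records) > 0.3,
        # disproportionate share of requests from mobile-network IPs
        "mobile_heavy": mobile_share > 0.8,
        # heavy concentration of downloads in a single country
        "geo_concentrated": top_country_count / len(records) > 0.7,
    }

# Six scripted requests all stopping at the same 1 MiB offset, four full plays
records = [(1_048_576, "mobile", "IN")] * 6 + [(50_000_000, "wifi", "US")] * 4
print(fraud_fingerprints(records, 50_000_000))
# {'same_byte_stop': True, 'mobile_heavy': False, 'geo_concentrated': False}
```

No single flag proves fraud; the point is that these patterns are cheap to compute once you have the raw data, which is exactly why Section 8 tells you to request the export.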

➤ Chart gaming through Apple ID manipulation

This method targets ranking position rather than raw download counts. Using large pools of Apple IDs, primarily sourced through overseas networks, promoters subscribe to shows at volumes high enough to move chart positions. A higher chart position appears more credible to advertisers evaluating by rank rather than verified data.

Chart position is a vanity metric for anyone buying ad placements. It says nothing about listener quality, engagement, or whether your specific buyer is in that audience.

➤ Passive auto-syncs from lapsed subscribers

This one isn’t deliberate fraud; it’s structural inflation. A listener subscribes, engages for a few months, loses interest, and never unsubscribes. Their app keeps downloading every new episode automatically. Those downloads count. That listener is gone.

On shows with several years of history and declining engagement, this passive subscriber residue quietly inflates current episode figures. A show that peaked two years ago may still report numbers that reflect a much larger active audience than it actually has.

What to do: When you see a show’s download figures, ask one question before going further: what would these numbers look like if passive auto-syncs and externally purchased traffic were removed? Sections 4 and 5 give you the tools to answer that before spending anything.

3. Why IAB Certified Doesn’t Mean Fraud-Free

IAB Tech Lab certification is worth something. Platforms that carry it, including Megaphone, Buzzsprout, Libsyn, Captivate, Podbean, and others, filter downloads using standardized rules applied consistently. That eliminates a real category of sloppy or inconsistent counting.

But the certification program was built to create industry-wide consistency, not to detect intentional manipulation. The guidelines are built on server log analysis. A device requests a file. The server logs the request. The platform applies its filtering rules to what remains. There is no step in that chain that verifies a human being intentionally pressed play.

For a download to pass IAB filtering, the episode file typically needs to be accessed past a minimum byte threshold, roughly equivalent to one minute of content. A script sending requests that reach that threshold counts. A gamer whose phone auto-downloaded an episode while earning in-game rewards counts. Both are IAB-compliant. Neither is necessarily a listener.
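The gap can be made concrete with a toy check. The bitrate arithmetic below is illustrative only; actual thresholds and encodings vary by platform:

```python
# Illustrative numbers only: at 64 kbps, one minute of audio is ~480 KB,
# standing in for the "roughly one minute" IAB-style threshold above.
BITRATE_KBPS = 64
ONE_MINUTE_BYTES = BITRATE_KBPS * 1000 // 8 * 60  # 480,000 bytes

def passes_iab_threshold(bytes_served):
    """Compliance-style check: verifies bytes served, not human intent."""
    return bytes_served >= ONE_MINUTE_BYTES

# A script pulling just past the threshold and a listener finishing the
# episode are indistinguishable to this check; both count as downloads.
print(passes_iab_threshold(500_000))     # True: script fetching ~1 minute
print(passes_iab_threshold(45_000_000))  # True: listener finishing the file
```

Both requests pass, which is the whole problem: the check measures bytes, and bytes are easy to fake.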

The IAB continues updating its guidelines; version 2.2 is the current standard. Progress is genuine and ongoing, but the gap between compliant and fraud-free still exists.

What to do: Treat IAB certification as a floor requirement, not a green light. It tells you the platform isn’t careless about how it counts. It does not tell you the numbers are honest. Confirm certification and then continue with every check that follows.

4. Red Flags in Every Media Kit You’ll See

Media kits are sales documents. They present numbers in the best possible light, and the information that would raise questions rarely appears voluntarily. These are the signals worth looking for, some visible in what’s included, others only visible in what’s missing.

➤ Lifetime downloads leading the pitch

A show opening with “12 million downloads since 2018” is telling you its history, not its present. The only number that matters for your campaign is per-episode averages from the most recent 90 days. Shows with strong current figures share them willingly and specifically. Shows that redirect to lifetime totals when pressed for recent data are almost always protecting a number that doesn’t hold up.

➤ A spike that didn’t hold

Look at the download history across the last 12 to 18 months. One month where downloads jumped sharply, typically sitting six to twelve months back, followed by a sustained plateau or gradual decline, is a pattern worth questioning directly. Organic audience growth doesn’t arrive as a vertical line. External download inflation often does.
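A spike-then-plateau pattern is easy to screen for once you have monthly figures. The jump and plateau thresholds below are illustrative assumptions, not industry benchmarks:

```python
def suspicious_spike(monthly, jump=2.0, tolerance=1.1):
    """Flag a month whose downloads jumped to >= `jump` times the prior
    month and then plateaued or declined instead of continuing to grow."""
    for i in range(1, len(monthly) - 1):
        if monthly[i] >= jump * monthly[i - 1]:
            # plateau or decline: every later month stays near the spike level
            if all(m <= monthly[i] * tolerance for m in monthly[i + 1:]):
                return i
    return None

# Hypothetical monthly history: a vertical jump that never grew again
history = [20_000, 21_000, 22_000, 55_000, 54_000, 52_000, 51_000]
print(suspicious_spike(history))  # 3: the suspect month is at index 3
```

Steady organic growth, by contrast, never trips the check, because no single month more than doubles its predecessor.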

➤ High browser download percentage

Most podcast listeners use native apps. When a show’s analytics show a disproportionate share of downloads coming from web players or browser-based requests rather than native apps, that’s an anomaly worth raising. Download inflation through mobile ad placements typically routes through browser-based delivery rather than app-based playback.

➤ No past sponsor has ever returned

A show with two or more years of advertising history where no sponsor in any category returned for a second placement tells you something without saying a word. Brands that find results renew quietly. Brands that find inflated numbers don’t come back, and most never explain publicly why they left. One-time placements across an entire sponsor history are a pattern, not a coincidence.

What to do: Before requesting any additional data, scan the media kit specifically for these four signals. Each one alone warrants a question. Any two together should slow the conversation down. Three or more together is a reason to move to the next show on your shortlist before investing further time.

5. How to Verify a Show’s Audience Is Actually Real

You’ve reviewed the media kit. No immediate red flags. Now you verify. This is the step most brands skip entirely, and it’s exactly where protection against fake downloads actually lives.

➤ Request 90-day episode averages in writing

Ask specifically for per-episode download averages from the most recent 90 days, not an annual figure, not a lifetime total. Episode-level numbers from the last quarter are what reflect your campaign’s actual reach potential. A show that responds quickly with specific data takes advertiser relationships seriously. A show that pushes back or redirects to aggregate figures is giving you real information before you’ve spent a cent.

➤ Ask for episode completion rate

Completion rate is the percentage of listeners who reach a defined threshold, typically 80% of total runtime. It’s the clearest signal of whether your mid-roll ad reaches an attentive audience. A show with strong download numbers but a completion rate the host can’t or won’t share deserves a direct follow-up before proceeding. Any show on Spotify for Podcasters or a certified hosting platform can pull this number without effort.
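The metric itself is trivial to compute once per-listener playback durations are available. The playback data below is hypothetical and assumes a 40-minute (2,400-second) episode:

```python
def completion_rate(listen_seconds, episode_seconds, threshold=0.8):
    """Share of listeners who reached the completion threshold
    (commonly ~80% of total runtime, per the definition above)."""
    completed = sum(1 for s in listen_seconds if s >= threshold * episode_seconds)
    return completed / len(listen_seconds)

# Hypothetical per-listener playback durations, in seconds
plays = [2400, 2400, 2000, 1200, 2300, 600, 2400, 1900, 2350, 2400]
print(round(completion_rate(plays, 2400), 2))  # 0.7
```

A 70% completion rate on real playback data says more about mid-roll reach than any download total, because it can only come from devices that actually played the episode.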

➤ Look for third-party verification

Podtrac independently verifies podcast audience size separate from what the hosting platform self-reports. Magellan AI tracks sponsor history and surfaces show-level performance data without relying on the show itself. If a show can point you to either, you’re working with independently verified figures. If it cannot, factor that into how much weight you give everything else it shares.

➤ Request the geographic breakdown

Ask for a geographic breakdown of the listener base, specifically what percentage of the audience is in the United States if that’s your target market. A show claiming a domestic audience but showing meaningful download volume from regions with no connection to your offer category warrants a direct conversation. Geographic concentration in countries with documented high rates of download fraud activity is one of the clearest quantitative red flags available at no cost to you.
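Turning a country-level breakdown into shares is worth scripting if you review many shows. The input format below (country to download count) is an assumption about what a typical export provides:

```python
def geo_share(country_counts, target="US"):
    """Return the target market's share of downloads plus every
    country's share, from a country -> download-count mapping."""
    total = sum(country_counts.values())
    shares = {c: n / total for c, n in country_counts.items()}
    return shares.get(target, 0.0), shares

# Hypothetical breakdown for a show claiming a predominantly US audience
target_share, shares = geo_share({"US": 6_000, "IN": 3_000, "BD": 1_000})
print(round(target_share, 2))  # 0.6
```

A 60% domestic share against a "predominantly US" claim, with the remainder concentrated in two countries unrelated to the offer, is exactly the kind of gap worth raising directly.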

➤ Read the Apple Podcasts reviews, not just the stars

Open Apple Podcasts. Find the show. Read the five most recent written reviews, the actual text, not the star rating. Do they describe a specific episode? Do they reference something the host said in a way only a real listener would frame? Generic five-star reviews with no specifics confirm almost nothing. Emotionally specific reviews from people describing their own experience with the content confirm that a real, engaged audience exists.

➤ Call one past sponsor reference

Ask the show for contact information for one brand that ran a campaign within the last twelve months and is willing to speak. A 15-minute conversation with a previous advertiser tells you more than any dashboard the show can provide. Ask what the results looked like, whether attribution was clean, and whether they would run again. A show that cannot produce a single reference, or that delays significantly in providing one, is sending a signal worth taking seriously.

What to do: Run all six checks before any budget conversation begins. A show that passes every one is worth negotiating with. A show that can’t or won’t complete more than two or three is not ready for your budget regardless of how the download numbers look.

6. Your Pre-Buy Checklist Before Signing Anything

This is the consolidated action list before any deal is finalized. Every item here connects directly to the fraud signals and verification checks covered in Sections 4 and 5. Nothing here is new; this is the execution tool.

➤ Data to request in writing, before any negotiation:

  • Per-episode download averages from the last 90 days (episode-level, not aggregated)
  • Episode completion rate from hosting analytics or Spotify for Podcasters
  • Geographic breakdown showing country-level listener distribution
  • Platform distribution breakdown confirming which apps listeners use
  • IAB certification confirmation including the specific hosting platform name
  • One sponsor reference from the last twelve months willing to speak about results

➤ Verification to run independently, before signing:

  • Check Podtrac or Magellan AI for third-party data not sourced from the show itself
  • Read at least five written Apple Podcasts reviews from the most recent submissions
  • Pull the full episode publishing history and check for unexplained gaps or sudden volume changes
  • Search your category in Magellan AI to see whether competitors sponsored this show and whether any returned for more than one placement

➤ Attribution to confirm active before the first episode airs:

  • Unique promo code assigned specifically to this show, confirmed active and functional
  • Dedicated landing page or vanity URL built for this placement
  • Pixel-based tracking configured and firing correctly before launch

If three or more items on this list cannot be confirmed before you sign, the show is not ready for your budget. Come back when they are, or move to the next show on your shortlist.

7. Attribution That Works Even If Numbers Lie

Here’s the most important structural shift in how to buy podcast advertising: build a measurement system that produces real conversion data regardless of whether the download figures were honest.

When your results don’t depend on trusting a show’s self-reported numbers, inflated downloads stop being a risk to your ROI. They become someone else’s problem.

➤ Unique promo codes per show

Each show gets its own code. When a listener redeems it, you know exactly which show drove that action. Promo codes undercount total conversions, since many buyers search directly or visit without typing a code, but they give you a confirmed, show-attributed baseline that download figures cannot.
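The mechanics are simple enough to sketch. The code-to-show mapping and order records below are hypothetical:

```python
# Minimal sketch: one promo code per show, tallied from order records.
CODE_TO_SHOW = {"TECHPOD20": "Tech Pod", "DAILYNEWS20": "Daily News"}  # assumed codes

def attribute_orders(order_codes):
    """Count confirmed conversions per show from redeemed promo codes.
    Orders without a recognized code stay unattributed, so this is a
    floor on conversions, not a total."""
    tally = {}
    for promo_code in order_codes:
        show = CODE_TO_SHOW.get(promo_code)
        if show:
            tally[show] = tally.get(show, 0) + 1
    return tally

orders = ["TECHPOD20", None, "DAILYNEWS20", "TECHPOD20", "UNKNOWN10"]
print(attribute_orders(orders))  # {'Tech Pod': 2, 'Daily News': 1}
```

Two orders in this sample carry no valid code, which is exactly the undercount the other three attribution methods exist to recover.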

➤ Vanity URLs specific to each show

A show-specific URL slug on your domain tracks visits and landing page activity independently of the host’s analytics. It captures more of the listener journey than promo codes alone and requires no trust in the show’s own data at any point.

➤ Pixel-based attribution

Platforms like Podscribe match your converted customers against listener IP and device data to identify which podcast episodes they heard before buying. According to Podscribe’s Q2 2025 Benchmark Report, pixel attribution uncovers nearly seven times more conversions than post-purchase surveys and over four times more than promo codes alone. This is the standard method for campaigns running at meaningful scale.

➤ Post-purchase surveys

One question at checkout, “Where did you hear about us?”, with podcasts listed as an option alongside specific show names, surfaces conversions that every technical method misses. It’s low-effort and consistently underused.

Running all four from day one means your conversion picture is real, attributable, and completely independent of whether the show’s download numbers were honest. That’s the actual protection.

What to do: Treat attribution setup as a launch prerequisite, not a post-launch task. If any of the four tracking methods isn’t confirmed active before the first episode airs, delay the launch until it is. Every episode that runs without full attribution in place is budget you cannot account for afterward.

8. You Suspect Fraud Mid-Campaign. Now What?

You’re two or three episodes in. Promo code redemptions are at zero. Branded search hasn’t moved. The audience size should have produced at least some signal by now. Here’s how to handle it without drawing conclusions before you have answers.

➤ Check the attribution chain before anything else

Before concluding fraud, verify that the promo code is active, the landing page loads correctly on mobile, and the pixel is firing on your site. Attribution failures from broken tracking are more common than actual fraud. Eliminate the obvious explanations first.

➤ Request a raw data export from the hosting platform

Ask for a download data export directly from the hosting platform: not the media kit summary, but the actual export file. Any show on a reputable host can produce this in minutes. Look specifically at the platform distribution and geographic breakdown. Patterns inconsistent with organic listener behaviour show up clearly in this data.
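If the export arrives as a CSV, a few lines surface the two distributions worth checking first. The column names below are assumptions; real exports vary by host:

```python
import csv
import io
from collections import Counter

# Assumed export columns; real hosting-platform exports vary.
EXPORT = """user_agent_class,country
app,US
browser,IN
browser,IN
app,US
browser,BD
browser,IN
"""

rows = list(csv.DictReader(io.StringIO(EXPORT)))
n = len(rows)
browser_share = sum(r["user_agent_class"] == "browser" for r in rows) / n
top_country, top_n = Counter(r["country"] for r in rows).most_common(1)[0]

# Most organic listening happens in native apps, so a browser-heavy export
# combined with geographic concentration is worth raising with the host.
print(round(browser_share, 2), top_country, round(top_n / n, 2))  # 0.67 IN 0.5
```

In a real export you would run this over thousands of rows, but the questions stay the same: what share of requests came from browsers, and where is the audience actually located.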

➤ Document before you raise it

If you find geographic anomalies, unusual platform distribution, or patterns consistent with artificial inflation, screenshot and save everything before any conversation with the host or network. You may need that documentation for a contract discussion.

➤ Raise it directly and specifically

A host who didn’t inflate numbers will want to help you understand what’s happening. One who did will struggle to explain the anomalies clearly. Frame the conversation around the specific data you found, not a general accusation. The quality of that response tells you most of what you need to know about how to proceed.

What to do: Don’t wait until the campaign ends to raise a concern. If the signals are there at episode three, raise it at episode three. Budget you haven’t spent yet can still be redirected. Budget spent on episodes seven and eight of a fraudulent show cannot.

9. Fraud Prevention That Lives in Your Process

Brands that don’t get burned by fake downloads aren’t necessarily the most skeptical ones. They’re the ones who made fraud prevention structural — built into the buying process so it happens automatically rather than show by show.

➤ Only buy shows on IAB-certified platforms

This doesn’t eliminate all fraud risk, but it eliminates the category of hosts who don’t track properly at all. Make platform certification a non-negotiable requirement before any show reaches your shortlist.

➤ Require third-party data before any significant buy

Make it a condition of any meaningful spend that the show provides Podtrac or Magellan AI data alongside its own analytics. Shows that take advertiser relationships seriously increasingly provide this without being asked. Shows that push back on the request are telling you something worth knowing before you commit.

➤ Set a three-to-five episode minimum for new shows

A single episode produces too little data to distinguish fraud from poor audience fit from bad timing. Three to five episodes with full attribution running gives you a pattern. One episode gives you a data point that could mean almost anything.

➤ Keep attribution active after baked-in campaigns end

Baked-in ads, recorded directly into an episode rather than dynamically inserted, stay in that episode permanently. Every new listener who discovers the show through its back catalogue will hear your ad. If you let the promo code expire and take the landing page down, you’ve built a dead end into a placement that’s still actively delivering.

Keep every tracking method live for at least twelve months after any baked-in campaign ends. Check attribution data at the six-month mark. The conversions still arriving from episodes that aired months earlier are real budget value that most advertisers never account for.

What to do: Write down the standard you’ll apply to every show before the next media plan goes out. Certification required. Third-party data requested. Attribution confirmed before launch. Three-episode minimum before evaluation. Those four rules eliminate most of the risk without eliminating most of the opportunity.

Worth Keeping in Mind

The brands getting the best results from podcast advertising right now aren’t spending the most. They’re the ones buying most carefully.

Podcast fraud exists. It’s documented. It’s been used by publishers large enough to make Bloomberg’s front page. But it’s detectable, preventable, and increasingly easy to route around with attribution methods that don’t depend on trusting anyone’s self-reported numbers.

The medium isn’t the problem. The question isn’t whether podcast advertising works. The question is whether you built a buying process designed to see it working and catch it when something else is happening instead.

References

Bloomberg / EMARKETER — “Podcasters Bought Millions of Fraudulent Listeners via Mobile Game Ads” — Investigation into iHeartMedia and New York Post using mobile ad networks to inflate download counts — emarketer.com, September 2022 — https://www.emarketer.com/content/podcasters-bought-millions-of-fraudulent-listeners-via-mobile-game-ads

Anthony Gourraud / Medium — “A New Model to Detect Thousands of Fake But IAB Certified Podcast Downloads” — Technical demonstration of script-based inflation that passes IAB filtering, with detection methodology — medium.com — https://anthony-gourraud.medium.com/a-new-model-to-detect-the-thousands-of-fake-but-iab-certified-podcast-downloads-i-got-20cee2e2eb39

IAB Tech Lab — “Podcast Measurement Technical Guidelines Version 2.2” — Current standards for IAB-compliant download counting, bot filtering, and pre-load rules — iabtechlab.com — https://iabtechlab.com/standards/podcast-measurement-guidelines/

ADOPTER Media — “Pixel Tracking: A Game Changer for Podcast Advertising” — Podscribe Q2 2025 data showing pixel attribution uncovers 7x more conversions than surveys and 4x more than promo codes — adopter.media, August 2025 — https://adopter.media/podcast-advertising-pixel-tracking/

The Podglomerate — “Chartable Alternatives for Podcast Ad Attribution in 2026” — Overview of the attribution landscape following Chartable’s December 2025 sunset, covering Podscribe, Magellan AI, and Podtrac — podglomerate.com, January 2026 — https://podglomerate.com/chartable-alternatives-2026/

HUMAN Security — “Audio Advertising Fraud: What You Need to Know” — Expansion of programmatic fraud into podcast and audio environments, verification challenges — humansecurity.com, January 2026 — https://www.humansecurity.com/learn/blog/audio-advertising-fraud-what-you-need-to-know/