From Feature to Revenue: How Ad Delivery Decisions Get Made on Mobile


Every feature in a mobile app exists on a spectrum. On one end, it serves the user. On the other, it serves the business. The best features do both. But the moment a feature intersects with ad delivery, the decision-making process becomes something else entirely: a negotiation between competing priorities that most engineers never see from the outside.

I've spent years sitting in the rooms where these decisions get made. Here's what actually happens.

The Anatomy of an Ad Delivery Decision

When someone says "we need to monetize this surface," that sentence carries more weight than most people realize. It triggers a chain of decisions that spans product, engineering, data science, sales, policy, and legal. Each group has a different definition of success, and none of them are wrong.

Product:      "Will this hurt engagement?"
Engineering:  "Can we build this without regressing performance?"
Data Science: "What's the expected revenue lift?"
Sales:        "Can we promise this inventory to advertisers?"
Policy:       "Does this comply with our ad standards?"
Legal:        "Are there regulatory concerns in certain markets?"

The engineer's job is not just to build the ad slot. It's to understand all six of these perspectives well enough to make the right technical trade-offs.

The Decision Framework

Over the years, I've developed a mental model for how ad delivery decisions actually flow. It's not linear. It's a series of gates, and each gate has a different gatekeeper.

Gate 1: Surface Eligibility

Before any ad logic runs, the system must determine whether a surface is even eligible for ads. This sounds trivial. It is not.

data class SurfaceEligibility(
    val surfaceType: SurfaceType,
    val userSegment: UserSegment,
    val geoRegion: GeoRegion,
    val contentContext: ContentContext
) {
    fun isEligible(): Boolean {
        // Some surfaces are never monetized
        if (surfaceType in EXCLUDED_SURFACES) return false
 
        // New users get an ad-free grace period
        if (userSegment.accountAgeDays < AD_FREE_PERIOD_DAYS) return false
 
        // Some regions have regulatory restrictions
        if (geoRegion.hasAdRestrictions()) return false
 
        // Sensitive content contexts suppress ads
        if (contentContext.isSensitive()) return false
 
        return true
    }
}

The list of exclusions grows over time. Crisis events, content sensitivity, user age, regional regulations, premium subscriptions. Each one adds a branch to the eligibility tree. What starts as a simple boolean becomes a policy engine.
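
One way to keep that growth manageable is to make the rules data instead of branches. Here is a minimal sketch of what such a policy engine might look like; `PolicyRule`, `AdRequestContext`, and the example rules are illustrative, not a real API:

```kotlin
// Hypothetical sketch: eligibility exclusions as named, data-driven rules
// instead of accumulating if-branches. All names here are illustrative.
data class AdRequestContext(
    val surfaceType: String,
    val accountAgeDays: Int,
    val regionRestricted: Boolean,
    val contentSensitive: Boolean
)

data class PolicyRule(
    val name: String,
    val excludes: (AdRequestContext) -> Boolean
)

class EligibilityPolicyEngine(private val rules: List<PolicyRule>) {
    // Returns the name of the first rule that excludes this request,
    // or null if the surface is eligible. The name doubles as a metric label.
    fun firstExclusion(ctx: AdRequestContext): String? =
        rules.firstOrNull { it.excludes(ctx) }?.name
}

val defaultRules = listOf(
    PolicyRule("excluded_surface") { it.surfaceType in setOf("onboarding", "checkout") },
    PolicyRule("new_user_grace") { it.accountAgeDays < 7 },
    PolicyRule("geo_restriction") { it.regionRestricted },
    PolicyRule("sensitive_context") { it.contentSensitive }
)
```

Adding a new exclusion is then one list entry rather than a new branch, and the returned rule name gives the metrics pipeline a reason code for every suppressed ad.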

Gate 2: Demand Availability

Even if a surface is eligible, there may not be advertiser demand to fill it. Running an ad request that returns empty is worse than not requesting at all, because it wastes battery, bandwidth, and latency budget.

class DemandPredictor(
    private val historicalFillRates: FillRateStore,
    private val userProfile: UserProfile
) {
    fun shouldRequestAd(surface: Surface): Boolean {
        val predictedFillRate = historicalFillRates.getPrediction(
            surface = surface.type,
            geo = userProfile.geo,
            timeOfDay = Clock.currentHour(),
            dayOfWeek = Clock.currentDayOfWeek()
        )
 
        // Don't waste a network call if fill rate is below threshold
        return predictedFillRate > MIN_FILL_RATE_THRESHOLD
    }
}

This is one of the most underappreciated parts of ad delivery engineering. The decision to not request an ad is just as important as the decision to show one. Every unnecessary ad request adds latency to the user experience and burns through the device's battery budget. On mobile, where connections are metered and batteries are finite, this matters enormously.

Gate 3: Placement Quality

The ad has demand. The surface is eligible. Now: where exactly does the ad go?

This is where the engineering gets genuinely hard. Placement quality is a function of:

  • Viewport position: Is the ad in a position where it will actually be seen?
  • Content adjacency: What organic content surrounds the ad? Does it create a jarring experience?
  • Scroll velocity: If the user is scrolling fast, placing an ad now means it won't register. It's a wasted impression.
  • Session depth: How far into the session is the user? Early-session ads have different engagement patterns than late-session ads.

class PlacementScorer {
    fun score(placement: AdPlacement, context: SessionContext): Double {
        var score = 1.0
 
        // Penalize placements during high-velocity scrolling
        if (context.scrollVelocity > FAST_SCROLL_THRESHOLD) {
            score *= 0.3
        }
 
        // Boost placements after natural content breaks
        if (placement.followsContentBreak) {
            score *= 1.4
        }
 
        // Penalize if user just saw an ad recently
        val timeSinceLastAd = context.timeSinceLastAdMs
        if (timeSinceLastAd < MIN_AD_INTERVAL_MS) {
            score *= 0.1
        }
 
        // Session depth adjustment
        score *= sessionDepthMultiplier(context.sessionDepthMinutes)
 
        return score.coerceIn(0.0, 2.0)
    }
 
    private fun sessionDepthMultiplier(minutes: Int): Double = when {
        minutes < 2 -> 0.7    // Light touch early in session
        minutes <= 30 -> 1.0  // Normal density
        else -> 1.2           // Engaged users tolerate slightly more
    }
}

The placement scorer is where art meets engineering. The numbers above aren't arbitrary. They're the result of dozens of experiments, each one testing a slightly different hypothesis about when and where ads perform best without degrading the experience.

Gate 4: Render or Defer

The final gate is the most time-sensitive. The ad has been fetched, scored, and assigned a placement. But between the moment the decision was made and the moment the ad needs to render, the context may have changed.

class AdRenderGate(
    private val viewportTracker: ViewportTracker,
    private val performanceMonitor: PerformanceMonitor
) {
    fun shouldRender(ad: PreparedAd): RenderDecision {
        // If the device is under memory pressure, defer
        if (performanceMonitor.isUnderMemoryPressure()) {
            return RenderDecision.Defer(reason = "memory_pressure")
        }
 
        // If frame rate has dropped, defer to avoid making jank worse
        if (performanceMonitor.currentFps < MIN_FPS_FOR_AD_RENDER) {
            return RenderDecision.Defer(reason = "low_fps")
        }
 
        // If the ad slot has scrolled out of the prefetch window, discard
        if (!viewportTracker.isInPrefetchWindow(ad.targetPosition)) {
            return RenderDecision.Discard(reason = "out_of_viewport")
        }
 
        return RenderDecision.Render
    }
}

This gate is invisible to most people, but it's the difference between a smooth app and one that stutters every time an ad loads. On lower-end devices, which represent the majority of the global Android market, this gate fires constantly. Deferring an ad render during a frame drop isn't lost revenue. It's preserved trust.
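
Seen end to end, the four gates compose into a short-circuiting pipeline: the first gate that says no stops the flow, and later gates never run. A minimal sketch, with illustrative names and types:

```kotlin
// Hypothetical sketch: the four gates as a short-circuiting pipeline.
// Gate names mirror the sections above; the types are illustrative.
sealed class GateResult {
    object Pass : GateResult()
    data class Stop(val gate: String) : GateResult()
}

fun runGates(
    request: String,
    gates: List<Pair<String, (String) -> Boolean>>
): GateResult {
    for ((name, gate) in gates) {
        // First failing gate wins; later gates never pay their cost.
        if (!gate(request)) return GateResult.Stop(name)
    }
    return GateResult.Pass
}
```

Ordering matters: the cheap checks (eligibility) run before the expensive ones (demand prediction, placement scoring), so a suppressed ad costs almost nothing.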

The Trade-offs Nobody Talks About

Revenue vs. Performance Budget

Every ad has a performance cost. Creative rendering, network calls, JavaScript execution for interactive formats. Each one draws from the same budget that organic content uses. The question is never "can we show an ad here?" It's "can we show an ad here without the user noticing the cost?"

Performance budget for a single screen:
├── Organic content rendering:  60%
├── Navigation + UI chrome:     15%
├── Ad rendering:               15%
└── System overhead:            10%

When the ad rendering exceeds its budget, it borrows from organic content. The user doesn't think "that ad was slow." They think "this app is slow." That attribution error is why performance budgets for ads matter more than most revenue teams realize.
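
One way to prevent that borrowing is to enforce the ad slice mechanically. Here is a hypothetical sketch; the 15% share comes from the breakdown above, while the class and method names are mine:

```kotlin
// Hypothetical sketch: charge ad work against its slice of the render budget
// so overruns are rejected instead of borrowed from organic content.
// The 15% default matches the breakdown above; names are illustrative.
class RenderBudget(totalMs: Double, adShare: Double = 0.15) {
    private val adBudgetMs = totalMs * adShare
    private var adSpentMs = 0.0

    // Returns true if the work fits; false means defer or drop the ad work.
    fun tryChargeAdWork(costMs: Double): Boolean {
        if (adSpentMs + costMs > adBudgetMs) return false
        adSpentMs += costMs
        return true
    }

    val remainingAdBudgetMs: Double
        get() = adBudgetMs - adSpentMs
}
```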

Fill Rate vs. Quality

A 100% fill rate sounds ideal. Every ad slot generates revenue. But fill rate and ad quality are inversely correlated at the margins. The last 10% of fill comes from lower-quality demand: ads with heavier creatives, lower relevance, or more aggressive calls to action.

The engineering decision is where to set the floor:

object QualityFloor {
    fun meetsMinimumQuality(ad: AdCandidate): Boolean {
        // Reject ads with excessive file sizes
        if (ad.creativeSizeKb > MAX_CREATIVE_SIZE_KB) return false
 
        // Reject ads with known-slow render paths
        if (ad.format in SLOW_FORMATS && !DeviceCapability.isHighEnd()) return false
 
        // Reject ads with low predicted relevance
        if (ad.predictedRelevanceScore < MIN_RELEVANCE_SCORE) return false
 
        return true
    }
}

Rejecting a bad ad feels like leaving money on the table. But showing a bad ad costs more than the revenue it generates. It trains users to ignore ad slots, which degrades performance for the good ads that come later.

Latency of Decision vs. Accuracy of Decision

Faster ad decisions mean lower latency. But faster decisions are made with less information. The system is constantly balancing:

  • Eager decisions (decide at content load time): Fast, but based on predicted context, not actual context.
  • Lazy decisions (decide at render time): Accurate, but adds latency when the user is actively scrolling.
  • Speculative decisions (decide eagerly, verify lazily): Best of both worlds, but doubles the engineering complexity.

Most mature ad delivery systems use the speculative approach. Pre-fetch the ad based on predictions. Verify the decision at render time. Discard if the context has changed. It's more code to maintain, but the user experience improvement is measurable.
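
A minimal sketch of the verify step, assuming the prefetch recorded the context it predicted; the two-position tolerance and the field names are illustrative:

```kotlin
import kotlin.math.abs

// Hypothetical sketch: an ad prefetched against a predicted context is
// re-checked against the actual context at render time. Names are illustrative.
data class ScrollContext(val position: Int, val fastScroll: Boolean)

data class SpeculativeAd(val id: String, val predictedAt: ScrollContext)

fun verifyAtRenderTime(ad: SpeculativeAd, actual: ScrollContext): Boolean {
    // Discard if the slot drifted out of the predicted window
    // or the user is now fast-scrolling past it.
    val stillNearby = abs(actual.position - ad.predictedAt.position) <= 2
    return stillNearby && !actual.fastScroll
}
```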

What Makes Mobile Different

Desktop ad delivery is relatively straightforward by comparison. The screen is large, the connection is usually stable, bandwidth is abundant, and the device has power to spare. Mobile inverts every one of these assumptions.

Screen real estate is finite. Every pixel given to an ad is a pixel taken from content. On a 6-inch screen, a single ad can occupy 40% of the viewport. The placement decision carries more weight.

Network conditions are variable. A user in a subway tunnel loses connectivity mid-fetch. The system needs graceful degradation: prefetched fallbacks, timeout handling, partial render recovery.
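
A sketch of what that degradation path might look like; `fetchRemote` and the prefetched house ad are stand-ins, and a production version would cancel the fetch with coroutines rather than just discarding a late result:

```kotlin
// Hypothetical sketch: fetch with a deadline, fall back to a prefetched
// creative on failure, and treat an empty slot as a valid outcome.
class FallbackAdLoader(
    private val prefetchedFallback: String?,
    private val timeoutMs: Long = 800
) {
    fun load(fetchRemote: () -> String?): String? {
        val deadline = System.currentTimeMillis() + timeoutMs
        val remote = try {
            // Late results are discarded; a real implementation would
            // cancel the in-flight request instead.
            fetchRemote()?.takeIf { System.currentTimeMillis() <= deadline }
        } catch (e: Exception) {
            null  // connectivity lost mid-fetch: degrade, don't crash
        }
        return remote ?: prefetchedFallback  // null here means show nothing
    }
}
```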

Battery is a shared resource. An ad that auto-plays a 15-second video at full brightness burns battery that the user mentally attributes to the app, not the ad. Battery-aware ad delivery isn't a luxury. It's a requirement for long-term retention.
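
Battery awareness can feed the same creative-selection logic. A hypothetical sketch; the thresholds and format tiers are illustrative, not from a real policy:

```kotlin
// Hypothetical sketch: downgrade the allowed ad format as battery tightens.
// Thresholds and tier names are illustrative.
enum class AdFormat { AUTOPLAY_VIDEO, ANIMATED, STATIC }

fun allowedFormat(batteryPct: Int, powerSaveMode: Boolean): AdFormat = when {
    powerSaveMode || batteryPct < 15 -> AdFormat.STATIC
    batteryPct < 40 -> AdFormat.ANIMATED  // no autoplay video on a draining battery
    else -> AdFormat.AUTOPLAY_VIDEO
}
```

On Android the inputs would come from something like `PowerManager.isPowerSaveMode` and the battery level broadcast; the mapping itself is the part worth experimenting with.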

Device fragmentation is real. The same ad creative that renders beautifully on a flagship phone can cause frame drops on a device with 2GB of RAM. Ad delivery on mobile requires device-capability-aware creative selection.

class DeviceAwareCreativeSelector {
    fun selectCreative(candidates: List<AdCreative>, device: DeviceProfile): AdCreative? {
        val eligible = candidates.filter { creative ->
            when (device.tier) {
                DeviceTier.HIGH_END -> true
                DeviceTier.MID_RANGE -> creative.complexity <= Complexity.MEDIUM
                DeviceTier.LOW_END -> creative.complexity == Complexity.SIMPLE
                    && creative.sizeKb < LOW_END_SIZE_LIMIT_KB
            }
        }
 
        return eligible.maxByOrNull { it.expectedRevenue }
    }
}

This kind of device-aware selection is invisible to advertisers and users alike. But it's the reason the same app can monetize effectively across a Pixel 9 and a budget device sold in emerging markets.

The Human Side

Behind every ad delivery decision is a room full of people making judgment calls. The data informs the decision, but it doesn't make it.

I've seen teams agonize over a 0.3% revenue gain that comes with a 0.1% retention regression. I've watched engineers push back on a product ask because they knew the performance cost would compound over time. I've sat in reviews where the right answer was "we don't know yet, let's run the experiment."

The best ad delivery engineers I've worked with share a common trait: they hold the user's experience and the business's goals in their head simultaneously, without letting either one dominate. They build systems that are opinionated about quality but flexible about strategy. They instrument everything because they know that today's gut feeling will be tomorrow's A/B test.

Lessons Learned

  1. The best ad is the one the user doesn't resent. Revenue per impression matters less than revenue per user over their lifetime. Optimize for the long game.

  2. Performance is a feature, not a constraint. Every millisecond of ad-related latency is a choice. Make it consciously.

  3. The decision not to show an ad is a decision. Treat empty ad slots as intentional product choices, not failures. Sometimes the highest-value action is showing nothing.

  4. Instrument the full funnel. From eligibility check to render completion, every step should emit metrics. The bugs that cost the most money are the ones in the decision logic, not the rendering.

  5. Respect the device. A monetization strategy that only works on flagship phones is a monetization strategy that doesn't work. Build for the median device, not the best one.

  6. Ship decisions, not just code. The ad delivery system is a decision engine. The code is an implementation detail. Focus on the quality of the decisions, and the revenue follows.

Ad delivery on mobile is one of those domains where the engineering challenge and the business impact are perfectly aligned. Every technical decision (where to place an ad, when to fetch it, whether to render it) has a direct, measurable effect on both user experience and revenue. That's what makes it hard. That's also what makes it worth doing well.
