Grant Simmons, Author at Kochava

Unlocking the Power of Your Marketing Data
Published November 28, 2023 | https://s34035.pcdn.co/blog/unlocking-the-power-of-your-marketing-data/


What Kochava Foundry Can Do for Your Brand

In today’s digital age, data is the unsung hero behind successful marketing campaigns. It’s like the wizard behind the curtain, making the magic happen. Brands need accurate, timely, and trustworthy data to make informed decisions, optimize their advertising efforts, and connect with their target audience effectively. Enter Kochava Foundry, our trusty sidekick, here to help you harness the power of data with a bit of a wink and a nod. In this blog, we’ll explore what Foundry can do for your brand, all while sneaking in a dash of wry humor.

1. Data Source Validation: Separating the Gems from the Cubic Zirconia

Let’s face it: not all data sources are created equal. Some are as reliable as your GPS, while others might lead you down a rabbit hole. Foundry starts by doing what we like to call “data source validation.” We’re like the data bouncers checking IDs at the door. We ensure that your data sources are the real deal—accurate, complete, timely, and as secure as a secret agent’s briefcase.

With Foundry on your side, you won’t have to worry about data that’s faker than a spray-on tan. We’ve got your back, and we won’t let your brand fall victim to unreliable data.

2. Data Quality Assurance: Polishing Your Data Crown Jewels

Data quality is the crown jewel of marketing success. It’s like having the Hope Diamond in your marketing toolkit. Foundry takes data quality seriously, making sure your data shines brighter than a supernova. We perform meticulous data quality assurance checks to spot any data blemishes or imperfections. Think of us as the data beauty therapists, making sure your data looks flawless.

3. Timely Data Delivery: We’re Not a Pizza Delivery Service (But Close)

In the world of digital marketing, timing is everything. Foundry ensures that your data arrives on time, every time. We understand that delayed data is like cold pizza—nobody wants it. So, rest assured that your data will be as punctual as a Swiss watch.

4. Data Security: Better than Fort Knox for Your Data

Security is our middle name (well, not really, but you get the point). We treat your data like it’s a national treasure. Foundry takes stringent measures to protect your data during its journey, making sure it’s secure every step of the way.

5. Data Source Reviews: Tea Time with Data Providers

Foundry goes the extra mile by establishing a tête-à-tête with data source providers. It’s like having tea time with your data buddies. We keep the lines of communication open to address any data-related issues promptly. We’re like the friendly neighborhood data watchdogs.

6. Actionable Insights: The Sherlock Holmes of Data Analysis

With Foundry, you gain access to actionable insights that Sherlock Holmes himself would envy. We help you decipher data, spot trends, and make data-driven decisions. Think of us as your trusty Watson, guiding you through the mysteries of your data.

7. Compliance and Industry Standards: Staying on the Right Side of the Law

We make sure your data sources play by the rules, just like a stern school principal. Foundry helps ensure you understand compliance with industry standards and regulations, keeping your brand out of hot water.

Foundry is Your Data Superhero

In summary, Foundry is your data superhero, here to help you make data-driven decisions with a hint of wry humor. Don’t let your brand’s success be left to chance—partner with Foundry, and let’s embark on a data-driven adventure together.

Reach out to us today to learn more about how Foundry can bring a touch of levity while we help supercharge your data management efforts. After all, who said data had to be boring?



Navigating the Ad Spend Jungle
Published October 24, 2023 | https://www.kochava.com/blog/navigating-the-ad-spend-jungle/


How Insight Packs from Kochava Foundry™ Light the Way

In the intricate landscape of digital marketing, brands confront a plethora of challenges. These obstacles, ranging from misattributed user acquisitions to the ever-changing ad realm, often leave advertisers in a perplexing situation. How can they confidently allocate their precious ad spend, knowing they’ll be scrutinized over the final outcome (success or failure)?

Kochava Foundry, with its revolutionary Insight Packs, emerges as a beacon, guiding brands to make informed and impactful decisions based on expert analysis.

Deep Dive into Insight Packs

Foundry, always at the forefront of innovation, offers two trailblazing Insight Packs tailored for today’s marketing conundrums:

Incremental Intent: In a world where every network claims superior customer acquisitions, Incremental Intent emerges as the truth-seeker. By meticulously calculating the variance between organic and driven installs, this tool offers a crystal-clear perspective. Brands can, therefore, redirect their budget towards avenues that genuinely amplify their advertising impact.

Loyalty and Engagement: The modern consumer is discerning and volatile. Retaining their loyalty is a Herculean task. This Insight Pack offers a magnifying glass into your media strategy’s real impact. By highlighting how different channels and campaigns influence customer loyalty and engagement, brands get a roadmap. Following this, they can judiciously adjust their spend, maximizing ROAS and fine-tuning acquisition strategies.
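To make the Incremental Intent idea more concrete, here’s a rough sketch of the underlying arithmetic: compare installs during a campaign window against a pre-campaign organic baseline. It’s purely illustrative (the numbers are invented, and this is not the actual Foundry methodology), but it shows why the delta matters more than the raw attributed install count.

```python
# Illustrative sketch: estimate incremental installs by comparing installs
# during a campaign window against an organic baseline observed beforehand.
# All numbers are made up for the example.

baseline_daily_installs = [1180, 1225, 1150, 1200, 1245, 1190, 1210]  # pre-campaign week
campaign_daily_installs = [1820, 1900, 1760, 1840, 1910, 1795, 1880]  # campaign week
campaign_spend = 25_000.0  # total campaign spend for the week (USD)

organic_baseline = sum(baseline_daily_installs) / len(baseline_daily_installs)

incremental_by_day = [max(day - organic_baseline, 0) for day in campaign_daily_installs]
total_incremental = sum(incremental_by_day)

# Cost per *incremental* install, not per attributed install.
cpii = campaign_spend / total_incremental if total_incremental else float("inf")

print(f"Organic baseline: {organic_baseline:.0f} installs/day")
print(f"Estimated incremental installs: {total_incremental:.0f}")
print(f"Cost per incremental install: ${cpii:.2f}")
```

In practice, a baseline this naive would be confounded by seasonality and whatever other media happened to be running, which is exactly the kind of nuance the Insight Pack analysis is meant to handle.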

Dissecting and Addressing 10 Key Pain Points

Let’s delve deeper into key marketing challenges and explore how Kochava Foundry’s tools and expertise pave the path to solutions:

1. Attribution Confusion:
Our sophisticated attribution platform delves beyond surface-level data. By leveraging both deterministic attribution and probabilistic modeling, we ensure an unambiguous view of user acquisition sources. Brands can then confidently reward the deserving networks for the conversions they actually drove.

2. Suboptimal Ad Spend:
The Incremental Intent Insight Pack stands out as the sentinel guarding against wasteful ad spend. By distinguishing between organic and campaign-driven acquisitions, it provides a nuanced understanding, helping brands streamline their budgets for optimal impact.

3. Low Customer Engagement:
Our advanced engagement analytics dive deep into user behavior post-installation. When merged with insights from the Loyalty and Engagement Insight Pack, brands receive a comprehensive view of any discrepancies. This enables a recalibration of ad messaging and the user experience to better align.

4. Short-term User Retention Woes:
Our retention analytics meticulously chart out user behavior trajectories post-install. Brands gain unparalleled clarity on user drop-off points, enabling them to refine onboarding and engagement touchpoints.

5. ROI Uncertainty:
Our detailed ROAS reports break down the performance of networks and campaigns, segment by segment. This granular view empowers brands to discern the genuine high-performers, ensuring investments that promise tangible returns.

6. Over-reliance on a Few Networks:
Our exhaustive performance metrics catalog offers a panoramic view of multiple networks. Brands, thus, are nudged to venture beyond their comfort zones, discovering uncharted territories in the advertising world.

7. Lack of Actionable Insights:
Kochava Foundry transcends traditional data offerings. With a blend of strategic consultations and expert-backed recommendations, brands receive a clear, actionable blueprint for the future.

8. The Ever-Changing Ad Landscape:
At Kochava, we pride ourselves on our agility. As digital advertising undergoes metamorphoses, from privacy regulations to emerging platforms, we ensure brands aren’t left in the lurch. With timely guidance, integration advice, and adaptive strategies, brands remain ahead of the curve.

9. Siloed Data Interpretation:
Our holistic dashboard amalgamates diverse metrics, offering brands a cohesive narrative. This unified perspective, enriched with data visualization tools, ensures brands grasp the intricate dance of different metrics and their cumulative effects.

10. Long-term Strategy Struggles:
We believe in a 360-degree approach. By synergizing historical data insights with forward-looking predictive modeling, we ensure a brand’s short-term tactics seamlessly merge with its long-term visions.

Insights for the Dynamic World of Digital Marketing

In the dynamic world of digital marketing, a brand’s survival hinges on its adaptability and informed decision-making. With Foundry’s Insight Packs, brands are equipped with a compass and a roadmap. As they navigate the tumultuous terrains of the digital realm, Kochava ensures their journey is not just safe but also supremely successful.

Visit Kochava.com/Foundry-Insight-Packs/ to learn more about Insight Packs and request a free consultation.


How Apple Search Ads + SKAdNetwork Upended iOS Marketing
Published March 21, 2023 | https://www.kochava.com/blog/how-apple-search-ads-skadnetwork-upended-ios-marketing/


iOS marketers have been forced to evolve

For many iOS app marketers, Apple Search Ads (ASA) is no longer an optional line item in their ad spend budget. Brands must bid on their own brand keywords or risk losing visibility in the App Store. Throw in Apple’s StoreKit Ad Network (SKAdNetwork), which severed the adtech industry’s near real-time feedback loop, leaving brands guessing and struggling to understand the relationship between their paid media and conversions, and you have the perfect catalyst forcing even more demand into ASA.

It’s the perfect storm, and it’s all thanks to Apple’s AppTrackingTransparency (ATT) framework and SKAdNetwork. Thankfully, you don’t have to passively accept the new normal. To take control of your performance potential, consider implementing a strong test-and-learn strategy and partnering with a mobile measurement partner (MMP) like Kochava.

ASA’s ascension

Let’s start by looking at a single advertiser. They’re a streaming entertainment brand we’ve all heard of that has been around much longer than I’ve been alive. This brand is likely front of mind for most users in the U.S. when they think of “Entertainment Media.”

Three years ago, this brand spent ~$2.4M monthly on iOS user acquisition (UA) for their main app. The spend was spread across 17 networks, and the UA team had a strong test-and-learn strategy. The networks at the time included DSPs (e.g., Aarki, Liftoff, ironSource, Moloco) and self-attributing networks (SANs) (e.g., Facebook/Instagram, Google, Snap, TikTok). At the time, ASA was 13th on the list, with a monthly spend of ~$44k.

Fast forward to the present (Q1 2023) – the brand now runs on two networks:

  • ASA $830k
  • Owned media (via Smartlinks)

As you can see, their growth has greatly flattened.

[Graphs: installs and quality by network, Q1 2019 vs. Q4 2022]

This is a sad new reality for most brands: 38% of iOS install attributions are now awarded to ASA. And when we break down the keywords tied to that spend, 95% are against their own brand’s term.

[Graph: ASA keyword type by spend]

What is being purchased with ASA? A light blue ad with white text that the user likely doesn’t even know is a paid placement? It’s reasonable to assume that most users were looking for the brand they knew but were snared by ASA. If you don’t bid against your own brand, your competitors will.

The recent redistribution of media

Here’s the broader view of how media attribution has been redistributed over the past two years in the iOS advertising ecosystem.

[Graph: iOS attribution redistribution]

What the heck happened? The ATT framework and SKAdNetwork happened. ATT enforcement hit at the end of April 2021, and by May, most devices were capable of carrying the ATT prompt. SKAdNetwork started to see wide adoption in the summer of ’21.

A quick recap of direct response mobile marketing

Let’s take a beat for a history lesson on direct response mobile marketing.

Within the mobile marketing ecosystem, an MMP provides the ability to track the relationship between an advertiser’s conversions and their paid media. In the case of Kochava, the advertiser implements our software development kit (SDK) into the app, and to the extent the advertiser wants to track conversions, they implement those conversions to be read by our SDK. These conversions could be installs, post-install registrations, paid sign-ups, purchases, etc. 

The advertiser (brand) comes into the MMP platform and creates campaigns to be trafficked by the ad networks. Links are used when ads are served, and different links are used when users click on or interact with the ads. Additionally, Kochava has control servers located all over the planet, allowing us to read an ad signal in real time. Our SDK can also read the conversions that advertisers wish to tie back to media (installs, registrations, purchases, etc.) in real time. Depending upon the advertiser’s attribution waterfall logic (which is fully customizable with Kochava), we answer the question of what the ads did relative to the target conversion(s), and we can syndicate that answer anywhere on the planet within 120ms. Then came SKAdNetwork.
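Before we get to SKAdNetwork, here’s a rough sketch of the waterfall idea just described: try the highest-integrity matcher first, fall back to weaker ones, and honor a lookback window for each tier. This is illustration only; the field names, tiers, and windows below are assumptions, not Kochava’s production logic.

```python
from datetime import datetime, timedelta

# Hypothetical attribution waterfall: highest-integrity matcher first,
# weaker matchers as fallback, each with its own lookback window.
WATERFALL = [
    ("device_id", timedelta(days=30)),       # deterministic device ID match
    ("ip_user_agent", timedelta(hours=24)),  # weaker probabilistic match
]

def attribute(install, touchpoints):
    """Return the winning touchpoint for an install, or None (organic)."""
    for matcher, lookback in WATERFALL:
        candidates = [
            t for t in touchpoints
            if t.get(matcher) and t.get(matcher) == install.get(matcher)
            and timedelta(0) <= install["time"] - t["time"] <= lookback
        ]
        if candidates:
            # Last touch within this tier wins.
            return max(candidates, key=lambda t: t["time"])
    return None

install = {"device_id": "idfa-123", "ip_user_agent": "1.2.3.4|iOS16",
           "time": datetime(2023, 3, 1, 12, 0)}
clicks = [
    {"network": "net_a", "device_id": "idfa-123", "ip_user_agent": None,
     "time": datetime(2023, 3, 1, 11, 30)},
    {"network": "net_b", "device_id": None, "ip_user_agent": "1.2.3.4|iOS16",
     "time": datetime(2023, 3, 1, 11, 55)},
]
winner = attribute(install, clicks)
print("Attributed to:", winner["network"] if winner else "organic")  # net_a wins on device ID
```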

Now back to SKAdNetwork

If you’re interested in the detail of how SKAdNetwork works, I’ll refer you to this page, which has a wealth of resources on the topic.  

In summary, the 120ms feedback loop that the mobile adtech industry relied on was severed; iOS campaign results via SKAdNetwork are mostly NULL (more on that later), but even when visibility occurs, it’s delayed and aggregated. So whereas the programmatic engine of a brand’s mobile advertising was making decisions in near real time, with SKAdNetwork they now have to wait days to get a partial answer at best.

Bottom line: Unless consent is captured via Apple’s ATT framework on both the publisher and advertiser apps, the only way to describe the relationship between your paid media and iOS conversions is through SKAdNetwork.  

Ostensibly, ATT and SKAdNetwork are focused on consumer privacy: removing the ability to triangulate persistent identification of individual records using metadata like timestamps and device identifiers without the end users’ consent. All of the ad data required for attribution is sent directly to Apple servers from the OS (no middleware), so no massive customer data leakage can occur; the tradeoff is that it severs the measurement-targeting feedback loop marketers were used to.

Do consumers feel better about it? Probably, for those paying attention. But we have not seen a migration from Android to Apple. The world is still roughly 27% iOS / 73% Android, with a heavier skew in the US at 56% iOS users. This hasn’t materially changed since ATT and SKAdNetwork 2.0 were announced in June of ’20 at Apple’s Worldwide Developer Conference.   

So, SKAdNetwork has created a lot of challenges for marketers. It’s opaque. It’s slow. And it’s so generalized as to be meaningless for performance insights in most cases, particularly SKAd 2.0 and below (which is sadly where most publishers are still stuck in terms of their adoption).

But the solution for many brands is obvious: Run on ASA! It’s real time, it’s deterministic, and it doesn’t live within the confines of SKAdNetwork’s rules. Whether any of us actually believes that ASA deserves 40% of the media funnel is a topic for another discussion. 

SO WHAT DO YOU DO?

My first recommendation is to perform a test – pulse (i.e., pause) your ASA campaigns and see what happens. A two-week pause should do it in most cases. When you stop spending on ASA, do your overall conversions decrease? They might. They might not.

But let’s say you do see an impact – meaning what you spent does appear to have driven installs. What then? You need a way to understand the incrementality behind your spend; conversions tied to ad spend should increase or decrease reasonably as the spend moves. We can help marketers observe these fluctuations with our new Always-On Incremental Measurement (AIM) product.

AIM sits above the direct response attribution data marketers are accustomed to, applying media mix modeling and AI atop custom models built using two main aggregate inputs:

– Network cost/day/region/app

– Conversions by day/region/app

With always-on incremental insights, you can decrease unwarranted cannibalization, reducing your costs without starving genuine and incremental growth.
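As a toy illustration of the kind of modeling that can sit on top of those two aggregate inputs (this is not the AIM algorithm itself, and the figures are invented), even a simple line fit of daily conversions against daily spend separates a baseline from a marginal response:

```python
import numpy as np

# Toy media-response check: daily spend for one network/region/app
# vs. total daily conversions. Values are illustrative only.
daily_spend = np.array([0, 500, 1000, 1500, 2000, 2500, 3000, 3500])
daily_conversions = np.array([410, 455, 470, 520, 540, 555, 590, 610])

slope, intercept = np.polyfit(daily_spend, daily_conversions, 1)

print(f"Baseline (spend = 0): ~{intercept:.0f} conversions/day")
print(f"Marginal response: ~{slope * 1000:.1f} extra conversions per $1,000")
# A marginal response near zero suggests paid media is claiming credit for
# conversions that would have happened anyway (cannibalization).
```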

ASA is good for ‘closing the deal,’ and as long as it retains unique advantages over other paid channels on iOS, we may as well all play along in trying to conquest competitors’ brands.

(Not to date myself, but I used to send cease-and-desist letters to other brands that were bidding on my brand’s keywords in Google search. It worked most of the time, for two reasons: it was my brand, not theirs, and they could do undue harm depending on the offer; and it created overall channel confusion. Of the 14 letters we sent, 12 worked. Note that these disputes typically centered on the descriptions in the marketing language of the keyword ad – which doesn’t really apply in Apple’s case.)

There are some real wins to be had, particularly in the generic and conquesting spaces. Understanding the quality of the performance tied back to the keyword is critical. It’s no big surprise that users searching your company name likely have a higher take rate in the app. After all, they ‘raised their hand’ to be part of the brand and got snared by a search ad, but they self-qualified, and it’s expected they would be a better long-term customer (i.e. higher quality). On the other hand, we see the quality really drop off below the brand terms – with a couple of exceptions in the conquesting space. This is where the opportunity lies, and it’s why Kochava built Search Ads Maven.

Search Ads Maven for ASA Optimization

Search Ads Maven is an ASA campaign management platform. It uses data connectors with Apple, app store optimization and keyword intelligence tools, and MMPs to automate your keyword bid optimization based on install performance and lower-funnel key performance indicators (KPIs) that are deterministic signals.

[Graphic: Search Ads Maven offerings]

This is possible because ASA doesn’t live under the boot of SKAdNetwork. As such, you can do some VERY interesting things with it. For instance, the attribution is both real time and deterministic, plus the keyword code is actually written to the app binary on download and can be retrieved by the MMP. What does this mean?

[Graphic: Search Ads Maven attribution funnel]

It means there is a keyword tied to an actual device, thereby allowing post-install performance to be tied back to the keyword.

[Table: keyword analysis with quality by keyword]

Quality in the keyword analysis table above refers to the number of installers who had a paid subscription within 7 days of installing. It becomes clear the ‘quality’ is much higher on the branded terms – these are the users who ‘raised their hand’ to the brand by searching for the actual company name (i.e., a brand effect, not a media effect). But there’s value to be won within the Generic and Conquest buckets as well, where we see significant variance in the quality of the customers. Not much can be done about winning more brand conversions, BUT building a solid, deterministic test-and-learn strategy within the non-brand terms is possible.

What no ASA marketer wants to see is an out-of-control keyword bidding war driving their ROAS into the red. With Search Ads Maven’s Automation Studio, marketers can customize rule logic based on a variety of triggers, including ROAS that takes into account in-app revenue or other custom goals based on post-install event data piped in from their MMP. Keyword bids can be paused when the winning bid cost drives ROAS into the “pit of despair,” as we like to call it, and then resumed once positive ROAS territory is reached. Typically, this is something marketers only observe after the damage is done, but with Search Ads Maven, your positive ROAS can be maintained around the clock, 24/7/365.
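Conceptually, that automation boils down to rule logic like the sketch below. The thresholds, field names, and function are hypothetical, not Search Ads Maven’s actual rule syntax.

```python
# Hypothetical keyword bid rule: pause a keyword when its rolling ROAS
# falls below a floor, resume once it recovers. Not actual product syntax.
ROAS_FLOOR = 1.0    # break-even: $1 of revenue per $1 of spend
ROAS_RESUME = 1.2   # require some headroom before resuming bids

def apply_roas_rule(keyword):
    """keyword: dict with rolling-window 'spend', 'revenue' (from MMP
    post-install events), and a 'paused' flag."""
    roas = keyword["revenue"] / keyword["spend"] if keyword["spend"] else 0.0
    if not keyword["paused"] and roas < ROAS_FLOOR:
        keyword["paused"] = True      # bid war pushed us into the red
    elif keyword["paused"] and roas >= ROAS_RESUME:
        keyword["paused"] = False     # back in positive territory
    return keyword, roas

kw = {"term": "streaming app", "spend": 4200.0, "revenue": 3150.0, "paused": False}
kw, roas = apply_roas_rule(kw)
print(f"{kw['term']}: ROAS {roas:.2f}, paused={kw['paused']}")  # ROAS 0.75 -> paused
```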

[Graph: attribution rule example]

Concluding thoughts

Mobile advertising has been forever altered on iOS. The days of funneling the majority of your spend to the duopoly as a sure bet are no more. 

Don’t get me wrong; there are opportunities using SKAdNetwork, but it is confusing. We can help with an in-depth consultation that looks at your iOS app(s), understands your KPIs and business objectives, and tailors a SKAdNetwork configuration strategy that will squeeze out as much insight as possible. Just keep in mind that the vast majority of publishers and ad networks still aren’t set up to support SKAdNetwork, so your media mix options are limited. 

If you’re ready to play the game on ASA (and really, you must at some level), don’t do it without a solid MMP at your back and a tool like Search Ads Maven to automate the actions based on keyword performance tied to lower funnel KPIs. There’s amazing potential for spend optimization, but doing it manually is painstaking when you have hundreds or even thousands of keywords. 

Want to stay up-to-date on industry trends, like those discussed in this article? Visit www.kochava.com/adtech-trends/


Obtaining Incremental Lift In the Wake of Adtech Data Aggregation
Published December 29, 2021 | https://www.kochava.com/blog/obtaining-incremental-lift-in-the-wake-of-adtech-data-aggregation/


The loss of row-level data doesn’t mean the end of accurate performance insights

The adtech industry has been undergoing monumental changes in how it accesses, measures, and analyzes data. While there have been, and continue to be, disruptions in how advertisers receive campaign data, there are solutions for obtaining detailed insights through incrementality testing. Although incrementality testing has traditionally been cumbersome, modeled synthetic control groups enable accurate and more affordable lift measurement.

In today’s real-time attribution system, the question of causality, that is, did an ad cause a conversion, is left unanswered. Real-time attribution simply tells you the user was in the vicinity of the media served; lift measures the value of the media itself. The two are not interchangeable. 

Answering the question of impact means performing incrementality testing, which historically has been time-consuming and costly. With Kochava Foundry’s MediaLift™, however, we can perform incrementality testing using modeled synthetic control groups.

The benefits of using synthetic control groups

With traditional incrementality testing, an advertiser must withhold advertising from a holdout (a.k.a. control) group which is often 10% to 20% of the total addressable target audience. Withholding advertising to a portion of your audience is costly as you are potentially losing revenue from them. 

 

MediaLift avoids this by leveraging an innovative device scoring system within the Kochava Collective identity graph to build a modeled synthetic control group that mirrors the attributes and behaviors of the exposed group who received the ad. 

Device scoring also eliminates bias creeping in between the exposed and control groups, which is an inherent problem with traditional incrementality testing. When the control group is carved out before the campaign is run, the final test group that actually gets reached by the digital ad campaign often ends up looking a lot different than the control group.
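To illustrate the general idea of score-based matching (this is not the Collective’s actual scoring system, and the scores and device IDs below are made up), a greedy nearest-score pairing produces a control group with the same size and score composition as the exposed group:

```python
# Hypothetical greedy nearest-score matching to build a synthetic control
# group that mirrors the exposed group. Scores and IDs are illustrative.
exposed = {"dev_a": 0.82, "dev_b": 0.41, "dev_c": 0.67}                   # device -> score
candidates = {"dev_x": 0.80, "dev_y": 0.39, "dev_z": 0.70, "dev_w": 0.55}

def build_synthetic_control(exposed, candidates):
    pool = dict(candidates)
    control = {}
    for device, score in sorted(exposed.items(), key=lambda kv: kv[1]):
        match = min(pool, key=lambda d: abs(pool[d] - score))  # closest unused score
        control[device] = match
        del pool[match]
    return control

print(build_synthetic_control(exposed, candidates))
# {'dev_b': 'dev_y', 'dev_c': 'dev_z', 'dev_a': 'dev_x'}
```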

Another costly aspect of traditional incrementality testing is the serving of public service announcements (PSAs). PSAs are often served to the control group as a way to compare their behavior against the exposed group’s. However, this practice is not even possible with some marketing channels, such as out-of-home (OOH) billboards.

Because MediaLift’s synthetic control groups are based on devices that have exhibited similar behavior to the exposed group’s, no PSAs are needed to compare behavior between the two groups. This ability lends itself to channels where measurement has been based more on estimates, such as OOH and television. By applying the mobile data available in the Collective, advertisers can see how OOH and/or connected TV (CTV) campaigns influence their users on mobile.

  • Minimized opportunity cost: no “holdback” groups who may have otherwise received ads (in the case of a pure holdout)
  • No hard cost on having to spend media on a non-responsive group (in the case of PSAs)

MediaLift incrementality testing for OOH

Like many other industries in 2020, the out-of-home (OOH) industry took a hit, but it has been healthily rebounding in the past year. As the OOH industry recovers (with OOH still leading the way in comparison to digital OOH), the ability to measure it alongside mobile will help amplify your marketing strategy. The two channels no longer have to be siloed, and measurement will show their correlation.

Billboard publishers have been using MediaLift to prove the efficacy of their clients’ ad spend. To perform incrementality testing, the Kochava Foundry team defines a universe of known devices that were in the vicinity of the billboard display. This group represents the exposed group eligible for attribution.

[Image: outdoor signage and incrementality lift]

With the geo-location of the campaign billboards, the Foundry team isolates the devices that were in the vicinity of them. Using the Device Scoring system in the Collective, they create a modeled control group that mirrors the exposed group in terms of device variable composition, geography, user behavior, etc. These two groups are theoretically identical, except that one of them encountered the OOH ad(s).

To validate the control group, the team matches the devices of both groups based on the score. Next, they look at performance to see how the two groups differ before and after the ad was displayed. In the graph below, the exposed and control groups are overlaid on each other, and it’s clear that the two mirror each other in behavior and are unbiased prior to the OOH ad campaign exposure. A separation may occur after the media is displayed, and sometimes that separation is substantial, as shown below. In this particular graph, the top (exposed) group remains engaged while the control group does not; the control group’s decline reflects a natural downtrend in the business.

[Graph: MediaLift incrementality testing, exposed vs. control]

In addition to measuring impact, the team measures the number of exposure events, meaning events derived from devices exposed to the billboards, the number of times a device was exposed to the media, and lift specifically from ad exposure. 
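A back-of-the-envelope version of that before-and-after comparison is a simple difference-in-differences. The conversion rates below are invented; this shows the shape of the calculation, not the MediaLift methodology itself.

```python
# Illustrative lift calculation: exposed vs. synthetic control conversion
# rates before and after the OOH flight (difference-in-differences).
exposed = {"pre_rate": 0.020, "post_rate": 0.031}
control = {"pre_rate": 0.021, "post_rate": 0.024}

exposed_delta = exposed["post_rate"] - exposed["pre_rate"]   # ~0.011
control_delta = control["post_rate"] - control["pre_rate"]   # ~0.003
incremental_lift = exposed_delta - control_delta             # ~0.008

relative_lift = incremental_lift / control["post_rate"]
print(f"Absolute lift: {incremental_lift:.3f} ({relative_lift:.0%} vs. control)")
```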

MediaLift for CTV

Connected TV (CTV), television that runs on the internet, is rapidly growing as the primary mode to access entertainment, and likewise, brands are adding it to their media mix. With major changes in the industry regarding user data privacy coming from Apple, Google, and Facebook, many advertisers are likely reallocating some ad spend to CTV as well. 

CTV and over-the-top (OTT) streaming services were already increasing in popularity before the pandemic, but it has catapulted their growth since. CTV is arguably the next emerging market to capitalize on much like mobile was back in 2011 and 2012. The beauty of its growth now is that there are mature mobile measurement tools to measure the second screen trend capturing the influence of CTV on mobile. Additionally, the level of insight on a household basis is highly detailed.

To measure lift on CTV, the Foundry team creates two similar groups of eligible devices, as they do for OOH campaigns. For CTV, they can use IP addresses to determine the universe of devices eligible for attribution within a specific region and then use that group to create the control group from the Collective. The data is scrubbed of phone carriers and devices that don’t have CTV capabilities until they are left with devices by household. Once they have their exposed and control groups, they can determine lift.

Application of MediaLift results

Although incrementality testing has been MediaLift’s focus, its applications go beyond it. Advertisers can upload their lift results to the Collective to see where there are mismatches between the exposed and control groups for improved targeting. MediaLift insights also include the ability to attribute conversions to a billboard or other outdoor signage (also called a “gross match”), such as airport signs, taxis, elevators, etc. It is also useful for observing trends in a new market.

Is MediaLift right for you?

With the right data available, MediaLift is a more efficient and affordable way to determine lift and campaign impact and to answer the question of causality. To obtain a MediaLift analysis, advertisers and publishers need to supply an ad signal (a data stream of impressions and clicks), a conversion asset (e.g., installs), and a way to tie the two together, which is typically an IP address.

Keep in mind that one analysis is not evergreen but is a snapshot of distinct moments of activity, so periodic assessments are more practical to see the history of marketing’s impact. 

For more information about MediaLift, visit the Kochava Foundry page, or contact us or your Client Success Manager.

Grant Simmons – VP of Kochava Foundry


Incrementality vs. Having a Test & Learn Approach
Published May 21, 2020 | https://www.kochava.com/blog/incrementality-vs-having-a-test-learn-approach/


Measuring true incrementality requires a commitment of time and resources, but advanced A/B testing can determine the effectiveness of campaigns without the same undertaking.

[Graphic: incrementality illustration]

A UA manager ran a campaign for $50K that reached 500K consumers. Of those, 30K installed. Was the campaign successful?

To answer that, you need to know how many people would have installed anyway, without ever seeing the ad. This is what incrementality testing promises to answer, but to get that answer, marketers must endure an often complicated and expensive process. There are other, more affordable avenues that can also answer the question of how effective your ad campaigns are.

So, was the campaign successful? The answer—like most things—is “maybe.” Oftentimes, attribution is confused with incrementality, yet they are very different measurement approaches, and each seeks a different outcome.

Incrementality and lift

Incrementality has become a buzzword of late as marketers want to not simply measure campaign outcomes but determine whether their ads truly influenced conversions. The problem is, what they’re asking for may not be what’s most practical for them as a business. Performing an incrementality exercise requires a commitment of time and money, which also includes an opportunity cost.

What is true incrementality?

To measure incrementality (aka lift or causality), you need to measure the number of consumers who would have converted (i.e., purchased) regardless of whether they saw your ad.

One thing to clear up—performing an incrementality exercise is not the same as attribution. Contrary to how it’s commonly discussed in the ecosystem, a click does not necessarily drive an install. Too often, we have equated a consumer’s interaction with an ad with a direct cause of the resulting action (event), but correlation does not equal causality. There are many factors that drive an install, and we’ll never know all of them.

Incrementality testing oftentimes involves segmenting an eligible audience from which you carve out a holdout or control group. This group is suppressed and receives no advertising. You then advertise to the other half and compare conversion rates. This is where incrementality testing starts to get tricky. 

From the group that received advertising, you can’t verify that everyone in that group saw the ad. Of those who saw the ad, you still won’t know if some in the group are brand loyalists and would have converted regardless of seeing the ad. Additionally, of those who received your ads, there is the likelihood of data bias since you are competing for the same pool of high-quality consumers as other advertisers. Chances are, you will win more bids for lower-quality consumers. Lastly, measuring lift requires multiple tests, which is costly on top of the opportunity cost of not advertising to the holdout group.

Let’s talk about PSAs and ghost ads 

When you perform an incrementality test with a holdout group, you can’t compare the outcome of an ad campaign fairly because they haven’t been served any ads. To create a fairer comparison, you can split the holdout group and serve public service announcements (PSAs) or ghost ads (flagged consumers who would have been served an ad) and then compare their behavior. Bids must be the same as the group you advertised to because you want this population to look like the ones who saw your ad. 

In spite of the efforts to create an apples-to-apples comparison between the two groups, the groups still look different because with PSAs there is no call to action (CTA). The PSAs act as a placeholder to see if what consumers do after they see a PSA is on par with what those in the advertised group do. However, the two segments won’t have had the same ad experience (as with a CTA). 

Possible comparisons

How likely is it that two populations will match?

[Chart: comparing testing populations for incrementality testing]

What I’ve outlined above doesn’t paint a pretty picture of incrementality testing. It’s not to say that you can’t do it, but it’s important to lay all the cards on the table and be clear about what it entails since there are some misconceptions about how it works, the perceived value, and the costs. What eventually answers the question of incrementality is time and repetition to create reproducible results and see what factors caused a lift in a campaign. 

Viable solutions: A/B testing with a verified data set

In lieu of incrementality, there are a number of more cost-effective alternatives to determine campaign impact through performance.

Going back to the example at the beginning, a UA manager could create an audience of 500K consumers and suppress a segment of those consumers as the holdout group. They could advertise to the other portion and more easily compare the outcomes of that group with the history of the holdout group for, say, the past 30 days.
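As a minimal sketch of that kind of comparison (the audience sizes, conversion counts, and significance test below are illustrative assumptions, not a prescribed methodology):

```python
import math

# Illustrative holdout comparison: advertised segment vs. suppressed holdout.
advertised = {"users": 400_000, "converters": 24_800}   # 6.2% conversion rate
holdout    = {"users": 100_000, "converters": 5_600}    # 5.6% conversion rate

p1 = advertised["converters"] / advertised["users"]
p0 = holdout["converters"] / holdout["users"]
relative_lift = (p1 - p0) / p0

# Two-proportion z-test to gauge whether the difference is just noise.
p_pool = (advertised["converters"] + holdout["converters"]) / (
    advertised["users"] + holdout["users"])
se = math.sqrt(p_pool * (1 - p_pool) * (1 / advertised["users"] + 1 / holdout["users"]))
z = (p1 - p0) / se

print(f"Advertised: {p1:.2%}  Holdout: {p0:.2%}  Lift: {relative_lift:.1%}  z = {z:.1f}")
```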

Other analysis options include: 

  • Time series analysis: This type of analysis involves alternately turning marketing off and back on to establish a baseline and to see incremental lifts from networks. Although effective, there is an opportunity cost in turning off all marketing temporarily. 
  • Comparative market analysis: Analysts define a designated marketing area (DMA) to find geographical pockets that behave similarly. They then surge the marketing in one DMA and refrain from the other. There is a strong chance of seeing conversion rate differences between the two DMAs but also an opportunity cost in surging marketing in one DMA and withholding efforts in the other.
  • Time To Install Quality Inference: This analysis compares the time of engagement vs. the quality of user graphs to easily understand what is causal or non-causal. While there is no opportunity cost, this type of analysis is less precise than others.
  • Forensic control analysis: This type of analysis is a modeling exercise in which a control group is created that mirrors the exposed group after a campaign has run. The response and performance are weighted up or down based on an algorithm (created from predictive variables). While there is no opportunity cost, copious amounts of data are required to create the model universe. 

Most important: Adopt a test & learn mentality

What’s difficult to obtain with incrementality testing is a known universe of devices split between consumers who were exposed to ads and those who were not. While incrementality testing is possible, its feasibility is another story. Know your threshold for testing and consider some of the options outlined above to measure success. Overall, adopting a test-and-learn mentality is what leads to successful marketing.

Interested in learning more? See how our team may help yours through our consulting services.


Data Privacy: The New Balancing Act for Brands
Published November 12, 2019 | https://www.kochava.com/blog/data-privacy-the-new-balancing-act-for-brands/


Can you comply and thrive amidst emerging data regulations?

The news lately is full of data breaches, and those from major corporations—and the tech giants—have thrust consumer data privacy into the political arena. The impact of these breaches has brought about much awareness of and scrutiny over the commercial use of personal data. With the advertising industry still grappling with the General Data Protection Regulation (GDPR) in the EU, more regulations, such as the California Consumer Privacy Act (CCPA), are impending stateside. These well-intentioned but often contradictory policies have created the perfect storm for industries like digital advertising, which rely on data to do business.

[Image: data privacy map]

The current legal landscape in a state of flux

Europe’s GDPR, implemented in May 2018, began a ripple effect for businesses around the globe. It has led many companies to withhold business in the region. Several landmark penalties have already been handed down. Since its enforcement, the Information Commissioner’s Office (ICO) has fined companies a total of $397M. 

Adding fuel to the fire of data mishandling have been mishaps at several tech giants including Facebook, Google, Microsoft, and Amazon. All have had data breaches or their data has been misused or exposed. Commercial breaches at Experian and now Capital One have also exposed sensitive consumer data—not to mention breaches at smaller companies, which are even more vulnerable to hacks.

While there is agreement about the need to protect data privacy, the question of how is embroiled in deep debate, particularly as the consumer fallout from these breaches has yet to be fully understood. Consumer data is vulnerable; companies who require data to function and provide their services now must earn consumer trust (in addition to complying with regulations) in order to succeed.

Confusion and contradictions

In the United States, there is no overriding federal legislation that protects the data of individuals. All states have some form of data breach notification laws, but they vary in what is covered or required. CCPA has made headlines for its data collection limitations and the control it gives consumers over how data collected on them is used. It’s been compared to GDPR and goes into effect in 2020. Although CCPA has garnered the most attention around state-specific regulations, Nevada has already passed legislation about how companies inform consumers of personal data collected.

At the federal level, Senator Brian Schatz (D-Hawaii), of the Senate Communications, Technology, Innovation, and the Internet Subcommittee, introduced the Data Care Act last year, which would require advertisers to protect consumer data in the same way the healthcare, legal, and financial industries are required to do.

Several significant advertising industry groups concerned about conflicting and contradictory aspects of regulations have joined the “Privacy for America” coalition. The coalition supports broader privacy rules, restrictions on certain data practices, new oversight protection and laws, increased rulemaking authority for the Federal Trade Commission, stronger data security protection, and penalties for violations. It is advocating to revise aspects of the CCPA that refer to a requirement that non-identifiable text IDs be tied to a device, thus revealing personally identifiable information (PII), i.e., data that may identify an individual.

Flawed intentions

A common misconception about the advertising community is that individuals are invasively tracked—almost spied on. Yet the advertising industry largely relies on anonymous, unique device identifiers to serve and track ads. In mobile advertising these are called mobile ad identifiers (MAIDs). MAIDs don’t reveal PII, and they can be refreshed or blocked by users, making them a safer form of identification.

While the intentions behind regulations to protect consumer data are valid, the logistics are flawed. One unintended consequence of some emerging regulations may be the collection of more sensitive consumer data than is currently tracked. For example, as noted by the Privacy for America coalition, in order to comply with a requirement to provide collected personal data to consumers who request it, adtech/martech companies would have to tie normally anonymous identifiers to personal information. To provide services tailored to individual preferences (for a GPS service app, for example), apps may resort to requiring personal information if they cannot use anonymous device identifiers. This would put consumer data at a significantly higher risk of unintended exposure.

Another unintended consequence of limiting anonymous identifiers is that it opens the door to attribution fraud. The use of probabilistic attribution (see iOS 14+ restrictions) makes devices and ad campaigns susceptible to fraud. It enables fraudulent entities to receive payment for their schemes, which steals from advertising budgets and misinforms business decisions. And if a fraudulent entity hijacks a phone, it can drain that consumer’s battery and cellular data.

Comply and thrive

While the ways of collecting consumer data are changing, businesses can be proactive about compliance. Advertisers can start practicing transparency by being clear about the information being collected and how it will be used and protected, even while best practices are still being determined.

Because advertising has taken a beating in the public eye, it’s important to prove trustworthiness. Advertisers have access to an incredible amount of consumer data. Providing transparency by using discretion with sensitive information, having safeguards, being upfront and clear about the data being collected, and obtaining adequate consent are all prudent steps advertisers can take.

If your company hasn’t needed to implement GDPR regulations, using them as a guideline is a good foundation in preparing for US state/federal regulations. Much of the focus is in obtaining proper user consent and being clear about permissions. Marketers will also need to consider how to comply with opt-in (where users agree to have their data sold, as with GDPR) versus opt-out (where users must tell businesses not to sell their personal data, as with CCPA) regulations. It may be wise to avoid blanket consent requests, such as a laundry list of user permissions to install an app, because it leaves the data vulnerable to misuse. 

Become more strategic

There will be growing pains as more regulations are confirmed, but look for the silver lining, too—reevaluating what and how you collect data may make you more strategic and efficient. Advertisers should consider what data they collect and why. By eliminating what you don’t need, you’ll reduce risks for privacy violations and data bloat. If you’re an advertiser who uses data to enhance the user experience, be conscientious of the data needed to make that happen. Considering the data you collect internally may also eliminate the need for third-party vendors or at least allow you to become more selective about whom you work with.

With GDPR, most of the advertising industry is already navigating the uncharted waters of data regulations while governing entities work to prevent misuse of personal data. Considering data consolidation and working with fewer vendors may aid in preparing for stricter limitations on data collection. Upcoming and emerging policies will no doubt impact businesses, but being mindful of what data is needed for marketing will make everyone more efficient.


Grant Simmons – Head of Client Analytics
Kochava


Get the Right Signal From Your Media Partners
Published May 2, 2018 | https://www.kochava.com/blog/get-the-right-signal-from-media-partners/


Marketers work under the assumption that their efforts influence user engagement. But is there evidence to support that assumption?

The answer lies in the quality of the datastream—the signal—sent to us as a measurement provider. By analyzing its quality, we can make better observations on the relationship between marketing and user engagement.

A typical signal

It’s reasonable to believe that there is a correlation between your marketing efforts and the effect of those efforts – meaning, your signal (clicks) should correlate with the effect (attributed installs).

We oftentimes see there is little to no correlation between the signal (clicks) and attributed installs. This makes it difficult to infer causality between paid media efforts and the attributed effect.

[Graph: traffic curve, clicks vs. attributed installs]

If we plot a trendline for the data, we get an R-squared of 0.37. If the volume of clicks and the installs attributed to those clicks were perfectly correlated, we’d see an R-squared of 1; completely uncorrelated would be 0. Unfortunately, the graph above represents what we typically see: a weak relationship between clicks and attributed installs, and in this case the correlation is actually NEGATIVE. We don’t believe that is reasonable.
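If you want to run the same sanity check on your own signal, the calculation itself is trivial; the daily figures below are made up to mirror the unhealthy pattern described above.

```python
import numpy as np

# Daily clicks reported by a network vs. installs attributed to those clicks.
clicks   = np.array([12_000, 15_500, 9_800, 22_000, 18_300, 25_400, 11_200])
installs = np.array([   310,    280,   330,    260,    300,    240,    320])

r = np.corrcoef(clicks, installs)[0, 1]   # Pearson correlation
print(f"Pearson r = {r:.2f}, R-squared = {r**2:.2f}")
# A healthy paid channel should show a clearly positive r; a negative or
# near-zero r means click volume tells you little about attributed installs.
```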

This kind of discrepancy makes it difficult, or impossible, to plan media spend. We should be able to extrapolate the clicks required to obtain a certain amount of installs. With data like this, how do you budget your ad spend?

To rectify this, the data needs to be cleaned up. Attribution relies on a good signal; if you can’t trust your signal, you can’t trust your attribution. And if you can’t trust attribution, you can’t trust measurement.

With a poor signal, you’re at risk for attribution fraud as a result of click injection. Think of the industry we work in; there are many incentives for fraud because of the last-click attribution model.

A poor signal may also be reflective of an overly broad lookback window that inaccurately reflects cause-and-effect between advertisements and user engagement.

Lastly, it may be the result of media partners sending a mixed signal (impressions as clicks).

Signal clean-up

Analyze each of these areas in cleaning your attribution signal to improve your campaign results:

  • Have media partners send impressions and clicks separately
  • Shorten lookback windows
  • Implement quality control metrics to ensure clean data
  • Measure media partner quality

Take a step back from key performance indicators and look at your signal. Is there a positive relationship between your clicks and installs? Do more clicks result in more installs? Is there any relationship at all? If not, there’s work to be done, and Kochava can help.

To read more about how to interpret and clean an attribution signal, read, “Having A Poor Signal Results In Poor Measurement,” by Grant Simmons published on Medium.


Market Incentives That Drive Fraud: The Truth Behind Reach vs. Frequency
Published December 12, 2017 | https://www.kochava.com/blog/incentives-for-fraud/


Last Click Attribution is an Incentive for Mobile Ad Fraud

See Grant’s post as originally published on Medium. For an in-depth version of his thought-piece, see below.


As the digital advertising ecosystem becomes educated on mobile ad fraud, it’s important to understand that fraud comes from a variety of different angles. The majority of fraud is actually a side effect of the way attribution is performed. It is, interestingly enough, directly correlated to the predominance of mobile app install ads being priced on a CPI basis.

Roughly three-fourths of the fraud detected by Kochava is characterized as attribution fraud—where the install is legitimate, but fraudsters attempt to get credit for either organic traffic or installs driven by another network partner. Tactics to game the attribution system include click spamming, click stuffing, ad stacking, and many other techniques.

The remainder of the fraud we detect is ‘manufactured,’ where the device or install itself is questionable. The vast majority, though, deals with publishers attempting to get credit where little or no credit is due. If you’d like to explore the various algorithms we use to detect and prevent attribution fraud, I encourage you to read the previous posts in my Fraud Abatement Series. The purpose of this post, however, is to explore the incentives that lead to the remarkable amount of attribution fraud we’re seeing across the industry.

In my prior life, I led a team of measurement analysts focused on campaign performance at one of the largest software companies in the world. We strove to express performance in terms of incremental lift. This meant asking the question: Does touching a household with media have a measurable effect vs. an identical household that was not touched by media? Or: In the absence of an ad, how many people would have taken the action anyway? This post won’t detail the challenges around measuring incremental lift in digital (there are many) but instead will take the learnings from more sophisticated measurement techniques and apply them to direct-response mobile marketing.

First impressions matter

One of the more interesting artifacts from incremental research was how much lift was generated by impression. With enough data, it was possible to detail the incremental effect of the first, second, third impression, etc. Invariably, the first impression does the most ‘work’ in influencing behavior. This makes sense when the customer has no prior engagement with the brand. It’s reasonable that the first time the user is exposed to an offer has the most effect.

Additional marketing touches are influential, however; while the first impression is the most important, it often takes multiple cumulative impressions to get someone to ‘pull the trigger’ and convert.

In the first graph, the overall lift for this campaign was $20. This was a cumulative lift based on impressions 1 through 5. In the second graph, when we calculate the delta between each impression, we see that the first impression had the most lift.

[Graphs: cumulative lift ($) and incremental ROI; incremental lift by impression]

What does incremental lift by impression have to do with direct response marketing with the intent of driving app downloads? And, how does this relate to the amount of fraud we’re observing?

The short answer: Networks are incentivized to have the last click, not to reach the most prospects. While the networks should be maximizing their reach in making those valuable first impressions, networks instead focus on frequency in order to win the last click.

Where no lift was observed, we often found that the ads weren’t viewable. Also, partners like Moat Analytics or Integral Ad Science (IAS) provided clues as to whether the impression was even seen by a human. In desktop/web marketing, that was as far as we’d go for fraud detection.

Viewability of mobile ads is certainly important but difficult to implement, particularly in-app. Also, the mobile app world as a whole does not account for impressions, so whether the ad is viewable or not, ingesting impression data is generally the exception to the rule.

Last click may not be the most influential

Direct response attribution has largely been born out of the demand for nearly instant feedback loops. Marketers—particularly the large ones—have demanded that advertising networks attune to signals in real time and adjust the ad mix accordingly. In order to have instant feedback, an install needs to be instantly attributed and posted back to the ‘winning’ network. To determine the winner, there is a waterfall hierarchy (for instance, a device ID match has higher integrity than a user agent plus IP match), along with the ability to adjust the attribution window to what the marketer deems reasonable. However, the bulk of matched winners rely on last-click attribution.

Determining attribution based on the last click casts a blind eye to the valuable touchpoints preceding it. A potential prospect may have viewed an ad on Facebook, watched 30 seconds of an advertising video on YouTube, played a minigame promoting the target app inside of a game already installed, and clicked a static banner in their web browser. In this scenario, the install will have been attributed to the last click interaction, but it’s reasonable to believe that each of the media touchpoints played a part in converting the prospect. So, where should a network put their efforts?

With upper-funnel user acquisition, the marketer benefits most from maximized reach: serving as many first impressions as possible to the largest possible population of prospects. But the network incentive is to have the last click, so instead of maximizing reach, the dollars lie in maximizing frequency. The industry has created an incentivization framework that keeps networks from doing what they should be best at. The result is the click spamming and attribution fraud everyone is witnessing.

Industry perceptions of attribution must change

All of this is to say that the attribution model should be improved to better reflect the most influential touchpoints prior to the install. There is value in collecting all of the touchpoints leading to an install, but collecting all the data is the hard part. While the measurement framework may not be ideal, the necessary elements are in place to improve. Here’s where I see opportunities for improvement:

1. All networks should pass both impressions and clicks. There is a ton of value in understanding not only who’s clicked an advertisement, but who’s been reached at all. To some extent, an impression does some amount of ‘work’ in the path to conversion, and we can’t account for it if we don’t have it.

2. As an industry, we should have the ability to ingest ad types, sizes, and consumption metrics, and to standardize the ad data. It’s a reasonable assumption that video ads do more work than static banners, or that one minute of video watched does more work than five seconds; tracking those details is imperative.

3. Marketers should use fractional attribution. Fractional attribution offers the ability to assign an increased or decreased weight on the value of a network’s traffic. While few marketers consider using fractional attribution, Kochava customers can fractionally allocate attribution credit or even use multi-touch attribution (MTA) models. They can assign more weight to video view-throughs, for instance, even when the video view wasn’t the terminal action in the path to download.
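As a simplified sketch of fractional credit assignment (the format weights here are arbitrary assumptions, not a recommended model), a single install’s credit might be split across every touchpoint like this:

```python
# Hypothetical fractional attribution: split credit for one install across
# all touchpoints, weighting richer ad formats more heavily than clicks.
FORMAT_WEIGHTS = {"video_view": 2.0, "playable": 1.5, "click": 1.0, "impression": 0.5}

touchpoints = [
    {"network": "social_net",  "type": "impression"},
    {"network": "video_net",   "type": "video_view"},
    {"network": "game_net",    "type": "playable"},
    {"network": "display_net", "type": "click"},      # terminal (last) touch
]

total_weight = sum(FORMAT_WEIGHTS[t["type"]] for t in touchpoints)
credit = {}
for t in touchpoints:
    share = FORMAT_WEIGHTS[t["type"]] / total_weight
    credit[t["network"]] = credit.get(t["network"], 0.0) + share

for network, share in credit.items():
    print(f"{network}: {share:.0%} of the install credit")
```

Under last-click, display_net would have taken 100% of the credit; under this fractional split, the video and playable touchpoints that arguably did more ‘work’ are rewarded too.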

In the graph below, the networks are segmented by their overall uniqueness and the dollar value (quality) of their attributed installs by how many other influencers were involved (1-5). The purpose of the model is to estimate how much a marketer would pay for a unique, target install. For example, if Network A had a total of 231,487 unique installs, the marketer would pay $463,352 ($0.50 cost per install). That amount decreases as the number of influencers increases. Moving forward, marketers can set a target and use uniqueness and quality (amount of revenue generated) of installs to weight attribution per network, thus, moving away from last-click attribution to one network. In some cases, the marketer might be paying more for traffic, but it would be more unique and higher quality.

An example of MTA rewarding uniqueness and quality is shown below:

[Chart: attributed install value by influencer network count]
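As a minimal sketch of the idea behind the chart, and not Kochava’s pricing model, a payout schedule might pay full price for fully unique installs and scale the price down as more influencers share the path; the base cost per install and decay factors below are assumptions.

# Sketch of paying more for unique installs and less as the number of
# other influencers in the path grows. The base CPI and decay schedule
# are illustrative assumptions, not Kochava's model.

BASE_CPI = 0.50
DECAY = {1: 1.00, 2: 0.70, 3: 0.50, 4: 0.35, 5: 0.25}  # keyed by influencer count

def network_payout(installs_by_influencer_count):
    """installs_by_influencer_count: dict of {influencer_count: installs}."""
    return sum(
        installs * BASE_CPI * DECAY[count]
        for count, installs in installs_by_influencer_count.items()
    )

# A network whose installs are mostly unique earns more per install overall.
print(network_payout({1: 10_000, 2: 2_000, 5: 500}))   # 5762.5
print(network_payout({1: 500, 2: 2_000, 5: 10_000}))   # 2200.0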

While MTA models aren’t perfect, they are a step in the right direction. They won’t eliminate fraud, but being able to see impression volumes and ad types lets marketers make better decisions. At Kochava, we’re working on more precise MTA measurement than what I’ve described above, and I anticipate we’ll have in-market examples in the coming months.

4. Measuring the incremental lift generated by an ad unit is the ideal. But incrementality measurement is remarkably difficult and expensive in digital, and even more so on mobile. The results are often inconclusive, even for a perfectly executed campaign.
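For reference, the arithmetic behind a lift test is simple even when running one cleanly is not: compare conversion rates between an exposed group and a held-out control. A minimal sketch with made-up group sizes:

# Minimal incremental-lift calculation for an exposed-vs-holdout test.
# The group sizes and install counts are made up for illustration.

exposed_users, exposed_installs = 1_000_000, 4_200
holdout_users, holdout_installs = 1_000_000, 3_500

exposed_rate = exposed_installs / exposed_users   # 0.42%
holdout_rate = holdout_installs / holdout_users   # 0.35%

lift = (exposed_rate - holdout_rate) / holdout_rate
incremental_installs = (exposed_rate - holdout_rate) * exposed_users

print(f"Lift: {lift:.1%}, incremental installs: {incremental_installs:,.0f}")
# Lift: 20.0%, incremental installs: 700

The hard part isn’t the math; it’s holding out a clean, comparable control group at sufficient scale.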

If attribution is about finding the channels that drove an install, then we need to consider more touchpoints outside of the last click.

About the Author

Grant Simmons is the Director of Client Analytics at Kochava and leads the team in analyzing campaign performance and business value assessments. He is the former head of Retail Analytics at Oracle Data Cloud where he worked with over 1,500 retail directors, VPs, CMOs and agencies to develop individualized test-and-learn strategies.

The post Market Incentives That Drive Fraud: The Truth Behind Reach vs. Frequency appeared first on Kochava.

Fraud Abatement Series #5—The Kochava Global Fraud Blocklist https://www.kochava.com/blog/global-fraud-blocklist/ Wed, 05 Apr 2017 17:50:52 +0000 https://www.kochava.com/?p=8723 The post Fraud Abatement Series #5—The Kochava Global Fraud Blocklist appeared first on Kochava.


The tools Kochava uses to identify fraud are part of the Fraud Console, which comprises a comprehensive Global Fraud Blocklist and a set of reports. The Global Fraud Blocklist consists of three components: network/site IDs, IP addresses and device IDs that have been flagged by our algorithms. It acts in real time and updates dynamically as new fraudulent entities are identified across all Kochava traffic.

In addition, customers can add their own site IDs, IP addresses and device IDs directly from their account’s Fraud Console to curate an account-level blocklist. In this post, I’ll explain the different levels and views we use to mitigate fraud in real time and how to enable the Blocklist.
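Conceptually, the Blocklist behaves like a set of lookups applied to each incoming click or install before attribution. The sketch below illustrates that idea only; the field names and example entries are assumptions, not the Kochava implementation.

# Conceptual sketch of applying the global and account-level blocklists
# to incoming traffic. Field names and example entries are assumptions.

GLOBAL_BLOCKLIST = {
    "site_ids": {"Site_ABC123"},
    "ip_addresses": {"203.0.113.7"},
    "device_ids": {"idfa-0000-aaaa"},
}

ACCOUNT_BLOCKLIST = {            # curated by the marketer in the Fraud Console
    "site_ids": set(),
    "ip_addresses": set(),
    "device_ids": set(),
}

def is_blocked(event):
    """event: dict with 'site_id', 'ip' and 'device_id' keys."""
    for blocklist in (GLOBAL_BLOCKLIST, ACCOUNT_BLOCKLIST):
        if (event["site_id"] in blocklist["site_ids"]
                or event["ip"] in blocklist["ip_addresses"]
                or event["device_id"] in blocklist["device_ids"]):
            return True
    return False

click = {"site_id": "Site_ABC123", "ip": "198.51.100.4", "device_id": "idfa-1111-bbbb"}
print(is_blocked(click))  # True: this click is excluded from attribution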

Blocklisted sites

Three separate criteria can land a network’s site ID on the Global Fraud Blocklist: MTTI outliers, ad stacking and invalid install receipts. Each entity must surpass an established threshold to be considered fraudulent, and the threshold required to land a site ID, IP address or device ID on the Blocklist is much higher than the one used for the reports in the Fraud Console, which flag statistical anomalies for marketers to investigate.

MTTI Outliers: With mean-time-to-install (MTTI), our Fraud Console will highlight any outliers that are 2.5 standard deviations from the network mean time for a given app. For the Blocklist, however, we are more stringent. For a specific site ID to be blocklisted, we look at a rolling timeframe in which the behavior was observed against multiple apps and the outlier site exceeded a volume floor on the minimum number of installs reported. We only blocklist sites that sit an additional standard deviation beyond that norm. Preload and self-attributing networks (SANs) are excluded from our algorithms.

The criteria for blocklisting sites are as follows (a rough sketch of these checks appears after the list):

  • Significant statistical outlier (more of an outlier than what’s reported in the Fraud Console)
  • Behavior must be observed on multiple apps
  • Rolling time window
  • Minimum volume of 50+ installs
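Here is that sketch. It illustrates the criteria only: the 3.5-standard-deviation cutoff and the shape of the inputs are assumptions, since the text above specifies just that the Blocklist bar sits a deviation beyond the 2.5-sigma Fraud Console flag and requires multiple apps, a rolling window and at least 50 installs.

import statistics

# Illustrative check of the MTTI blocklisting criteria. The 3.5-sigma
# cutoff is an assumption standing in for "an additional deviation beyond
# the 2.5-sigma console flag"; the input shapes are assumptions too.

BLOCKLIST_SIGMA = 3.5   # assumed stricter cutoff for the Blocklist
MIN_INSTALLS = 50       # volume floor per outlier observation
MIN_APPS = 2            # behavior must be observed on multiple apps

def is_outlier(site_mtti, network_mttis, sigma):
    """network_mttis: per-site MTTI means seen across the network for one app."""
    mean = statistics.mean(network_mttis)
    stdev = statistics.stdev(network_mttis)
    return stdev > 0 and abs(site_mtti - mean) > sigma * stdev

def should_blocklist_site(observations):
    """observations: one dict per app in the rolling window, e.g.
    {"site_mtti": 45.0, "installs": 120, "network_mttis": [...]}."""
    outlier_apps = sum(
        1 for o in observations
        if o["installs"] >= MIN_INSTALLS
        and is_outlier(o["site_mtti"], o["network_mttis"], BLOCKLIST_SIGMA)
    )
    return outlier_apps >= MIN_APPS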

An earlier post I wrote explored MTTI fraud in detail.

Ad stacking: As with MTTI, we’re more stringent with the Blocklist than with what we report in the Fraud Console. We set a minimum threshold for stacked clicks, and anything beyond that threshold is blocklisted. I discussed ad stacking in detail in my previous post.

Invalid install receipt: For installs originating from the iTunes App Store or Google Play Store, we receive a receipt confirming that an installation occurred. When the store returns a non-verified install receipt, we deem the install, as reported by the site, fraudulent. Again, we set a minimum number of unverified receipts required to warrant blocklisting a site ID.
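In practice this rule reduces to counting non-verified store receipts per site ID and blocklisting a site once a minimum is crossed; the sketch below assumes a hypothetical threshold of 25 unverified receipts.

from collections import Counter

# Sketch of the invalid-receipt rule: count installs whose store receipt
# failed verification, per site ID, and blocklist sites that cross a
# minimum. The threshold of 25 is a hypothetical placeholder.

MIN_UNVERIFIED_RECEIPTS = 25

def sites_to_blocklist(installs):
    """installs: iterable of dicts like {"site_id": ..., "receipt_verified": bool}."""
    unverified = Counter(i["site_id"] for i in installs if not i["receipt_verified"])
    return {site for site, count in unverified.items() if count >= MIN_UNVERIFIED_RECEIPTS}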

Blocklisted IP addresses

We flag instances of anonymized IP addresses, including proxies, VPNs and TOR exit nodes. These are sites purposefully trying to mask their traffic source.

Bad actors obscure their true IP addresses using proxies or VPNs (virtual private networks) to circumvent geolocation restrictions; both are also common in botnet traffic.

TOR, or “The Onion Router,” routes web traffic through a byzantine maze of encrypted relays in order to anonymize it. A TOR exit node is the gateway where that encrypted traffic hits the open internet. Legitimate traffic sources should not mask where their traffic originates.
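Operationally, the check amounts to testing each click or install IP against known proxy ranges and TOR exit-node lists. A minimal sketch follows; the example entries are placeholders, and a real list would be refreshed continuously from external feeds.

import ipaddress

# Sketch of flagging anonymized traffic sources. The entries below are
# placeholders; real proxy ranges and TOR exit-node lists are maintained
# and refreshed from external data feeds.

KNOWN_PROXY_RANGES = [ipaddress.ip_network("198.51.100.0/24")]
KNOWN_TOR_EXIT_NODES = {ipaddress.ip_address("203.0.113.50")}

def is_anonymized_ip(ip_string):
    ip = ipaddress.ip_address(ip_string)
    if ip in KNOWN_TOR_EXIT_NODES:
        return True
    return any(ip in network for network in KNOWN_PROXY_RANGES)

print(is_anonymized_ip("203.0.113.50"))  # True: candidate for the Blocklist
print(is_anonymized_ip("192.0.2.10"))    # False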

Blocklisted device IDs

Device IDs are placed on the Global Fraud Blocklist if they show exorbitant click volume. To be added to the Blocklist for click volume, a device must surpass a set threshold of clicks within a 24-hour period.

Not all devices with high click volumes are automatically blocklisted; a device may be flagged in an individual app’s fraud report from Kochava without being blocklisted. For more information, read my previous post about devices with high click volume.
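A simplified version of that rule keeps a rolling 24-hour count of clicks per device and flags any device that exceeds a cap; the cap of 1,000 clicks below is a placeholder, not Kochava’s threshold.

from collections import defaultdict, deque

# Sketch of the device click-volume rule: flag a device ID once its click
# count inside a rolling 24-hour window exceeds a cap. The cap of 1,000
# clicks is a placeholder, not Kochava's actual threshold.

WINDOW_SECONDS = 24 * 60 * 60
CLICK_CAP = 1_000

clicks_by_device = defaultdict(deque)  # device_id -> click timestamps (epoch seconds)

def record_click(device_id, timestamp):
    window = clicks_by_device[device_id]
    window.append(timestamp)
    # Drop clicks that have aged out of the 24-hour window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > CLICK_CAP  # True: candidate for the Blocklist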

Devices where we’ve observed an invalid purchase receipt are also added to the Global Fraud Blocklist. There are two primary methods for generating false receipts that spoof verification from the iTunes App Store or Google Play Store:

  1. A hijacked device with malicious code on it pretends to be the App Store
  2. “Man-in-the-middle” attacks where the malicious code sits between the device and the App Store

In both instances, a false receipt is generated and sent back to the device. The app accepts the receipt as a legitimate transaction, but there is no record of the transaction from the respective App Store.

Enabling the Global Fraud Blocklist

Marketers can begin using the Global Fraud Blocklist by contacting their Client Success Manager to enable the service. From that point on, marketers are in control of how they use the Blocklist.

There are two ways to enable the list:

  1. Navigate to the Fraud Console (under Account Options) and select whether to apply the Blocklist to your entire account or to specific apps in the account
  2. At the tracker level (under Campaign Manager and Traffic Verification), select whether to apply the list by IP address, site ID, device ID or all three, in addition to other criteria used to verify the traffic delivered by networks

Marketers can also customize their Blocklist by adding any fraudulent device IDs, site IDs and IP addresses they’ve encountered to the list in the Fraud Console.

With the Fraud Console, marketers have a powerful suite of preventative tools to eliminate fraudulent activity from their traffic. Because fraud is evident in most app traffic, employing the real-time Global Fraud Blocklist is a necessary step to protect ad spend and run effective campaigns with legitimate impressions, clicks, installs and post-install events.

In case you missed them, read also Parts 1, 2, 3 and 4 of the Fraud Abatement Series.

About the Author

Grant Simmons is the Director of Client Analytics at Kochava and leads the team in analyzing campaign performance and business value assessments. He is the former head of Retail Analytics at Oracle Data Cloud where he worked with over 1,500 retail directors, VPs, CMOs and agencies to develop individualized test-and-learn strategies.

For more information about the Kochava Fraud Console, Contact Us.

The post Fraud Abatement Series #5—The Kochava Global Fraud Blocklist appeared first on Kochava.

Fraud Abatement Series #4—Ad Stacking https://www.kochava.com/blog/ad-stacking/ Wed, 29 Mar 2017 22:47:42 +0000 https://www.kochava.com/?p=8569 The post Fraud Abatement Series #4—Ad Stacking appeared first on Kochava.


Ad stacking is a fraud technique in which multiple ads are layered on top of one another in a single ad placement. Only the top ad is visible, but when a user clicks on it, a click is registered for every ad in the stack.

[Illustration: multiple hidden ads stacked behind one visible ad]

Kochava is unique in the ecosystem because we see billions of ad impressions across a wide variety of ad campaigns and thousands of publishers, networks and exchanges, all in real time. Our fraud algorithms detect and report cases where multiple clicks are registered at the exact same date timestamp for a given ad placement. The table and chart below detail this behavior for a gaming app during the month of January:

[Table and chart: clicks sharing identical timestamps for a gaming app in January]

When we look specifically at the site IDs, we see one main offender (Site_OEP) with 698,000 ads sharing the same date timestamp and 164,000 instances of stacking (an average of 4.3 clicks for different apps per ad unit). Overall, there were relatively few installs (97).
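As a rough illustration of the detection logic described above, clicks can be grouped by site, placement and exact timestamp, and placements where several clicks for different apps share one timestamp are then surfaced. The field names are assumptions for the example, not Kochava’s schema.

from collections import Counter, defaultdict

# Sketch of the stacking check: group clicks by (site_id, placement_id,
# exact timestamp) and count the groups where clicks for more than one
# app share a single timestamp. Field names are illustrative assumptions.

def stacking_report(clicks):
    """clicks: iterable of dicts with 'site_id', 'placement_id',
    'timestamp' and 'app_id' keys."""
    groups = defaultdict(set)
    for c in clicks:
        key = (c["site_id"], c["placement_id"], c["timestamp"])
        groups[key].add(c["app_id"])

    stacked = Counter()
    for (site_id, _, _), apps in groups.items():
        if len(apps) > 1:  # several apps "clicked" at the exact same instant
            stacked[site_id] += 1
    return stacked  # e.g. Counter({"Site_OEP": 164000, ...})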

What’s going on here?

There are several reasons fraudsters employ the ad stacking tactic:

  • Click stuffing: Once the user clicks on the visible ad, the click is registered for all the ads stacked behind it. If the fraudster is strategic, they may stack ads for similar apps, meaning apps the user is likely to install in the future. If they can register a click for what the user may eventually install, they stand a chance of receiving attribution for it. This type of fraud focuses on scamming the marketer by gaming attribution.
  • Impression stuffing: I wrote earlier about high click-to-install rates. If an impression is sent to a click endpoint, it is registered as a click even though no user ever clicked. The scam is twofold: first, no actual click took place; and second, the scheme sends multiple “clicks” en masse against multiple apps with the intent of 1) gaming attribution (scamming the marketer) and 2) delivering bogus impressions (scamming the network).
  • Viewability fraud: In addition, if impressions are stacked, all of the impressions within the ad container may be reported as “viewed.” In this instance, fraudsters are gaming viewability metrics as well.

Prevent Ad Stacking with the Fraud Console

Luckily, ad stacking is a relatively easy fraud tactic to catch and is one of the fundamental fraud behaviors we surface in our Fraud Console, which surfaces 11 indicators of fraud in total. Any site ID, device ID or IP address flagged in the advertiser’s Fraud Console can easily be added to their Account Blocklist to abate fraud in real time.

In addition, sites that execute this tactic across multiple marketers are automatically added to the Global Fraud Blocklist. The list is dynamically updated with new entities added if they regularly participate in fraudulent activity. Networks and publishers may work to remove themselves from the list by demonstrating aggressive anti-fraud processes and solutions.

In my next post, we’ll explore the Global Fraud Blocklist and how to make it work for you.

In case you missed them, read also Parts 1, 2, 3 and 5 of the Fraud Abatement Series.

About the Author

Grant Simmons is the Director of Client Analytics at Kochava and leads the team in analyzing campaign performance and business value assessments. He is the former head of Retail Analytics at Oracle Data Cloud where he worked with over 1,500 retail directors, VPs, CMOs and agencies to develop individualized test-and-learn strategies.

For more information about the Kochava Fraud Console, Contact Us.

The post Fraud Abatement Series #4—Ad Stacking appeared first on Kochava.
