Avoiding algorithmic bias in programmatic display: An introduction

According to the Cambridge Dictionary, an algorithm is 'a set of mathematical instructions that must be followed in a fixed order, and that, especially if given to a computer, will help to calculate an answer to a mathematical problem'. To benefit from the efficiencies that algorithms bring to marketing, it's important to avoid algorithmic bias, which occurs when a computer reflects the implicit values of the humans who programmed the algorithm, influencing the outcome in some way.

How does algorithmic bias occur?

Bias can creep in at several points. The data sources may have been collected in a way that reflects human bias, or the programmer may have built a set of hierarchies into the algorithm that skews its output. For example, in a data management platform (DMP), a lookalike audience is built from a seed audience. The data points that inform the seed audience are pre-defined by a human, so human input at this level can ultimately influence the marketing campaign.
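To make that concrete, here is a minimal sketch of how human-chosen rules shape a seed audience. The field names, thresholds, and users below are hypothetical assumptions for illustration, not a real DMP API; the point is that a person picks the rules, and everything downstream inherits them.

```python
# Hypothetical sketch: a seed audience for a DMP lookalike model.
# The attributes and thresholds are illustrative assumptions chosen
# by a human -- not drawn from any real platform.

users = [
    {"id": 1, "age": 29, "interest": "fitness", "purchases": 3},
    {"id": 2, "age": 45, "interest": "travel",  "purchases": 0},
    {"id": 3, "age": 31, "interest": "fitness", "purchases": 1},
    {"id": 4, "age": 52, "interest": "fitness", "purchases": 2},
]

def build_seed(users, max_age=35, interest="fitness"):
    """Human-defined rules: only users under max_age with a given interest.
    Users excluded here can never inform the lookalike model."""
    return [u for u in users if u["age"] < max_age and u["interest"] == interest]

seed = build_seed(users)
print([u["id"] for u in seed])
```

Note that user 4 shares the target interest and has purchased, but the age rule excludes them, so any lookalike expansion built from this seed quietly inherits that choice.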

The very notion of an algorithm leads you to believe it's unbiased. But what happens when a 'set of rules' limits our ability to reach the right audience? How can we reduce algorithmic bias in order to encourage better performance?

We're in control

Let's stop blaming the 'black box' solution and start questioning it. Human input is what creates the bias in the design in the first place. In the programmatic buying space, here are three tips to help planners and traders alike get a better grasp of how humans can influence the outcome.

1. Challenge the setup

Which data points are you using to direct a given algorithm? Do you know how this data was collected? Are you confident in its quality?

2. Test between different data sources that influence your algorithm

Whether it's a buying platform, a data source, or the algorithm itself, can you test between them? Are your findings similar? If not, how do they differ?
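A simple way to frame such a test is to run the same budget through each source and compare the same KPI. The sketch below assumes CPA as the metric; the source names and figures are hypothetical, not real campaign results.

```python
# Hypothetical sketch: comparing CPA across two data sources feeding
# the same algorithm. Spend and conversion figures are illustrative.

results = {
    "source_a": {"spend": 5000.0, "conversions": 125},
    "source_b": {"spend": 5000.0, "conversions": 80},
}

def cpa(spend, conversions):
    """Cost per acquisition; infinite if nothing converted."""
    return spend / conversions if conversions else float("inf")

for name, r in results.items():
    print(f"{name}: CPA = {cpa(r['spend'], r['conversions']):.2f}")
```

If two sources diverge sharply on the same KPI under the same conditions, that is the cue to question how each was collected before trusting either to steer the algorithm.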

3. Granular setup can help reduce bias

When an algorithm, for example, automatically bids to drive the lowest possible CPA, it's easy to become lazy. A granular setup can support performance-driven metrics. If your algorithm is running wild, your only insight will be that it did or didn't work. But if your setup breaks out market, geotargeting, device type, and so on, you'll have a much stronger idea of which market is performing best, along with information on which geotargeting parameters and devices should inform the next steps and ultimately improve performance.
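The contrast between the two setups can be sketched as follows. The markets, devices, and figures are hypothetical; the point is the same data viewed in aggregate versus by segment.

```python
from collections import defaultdict

# Hypothetical sketch: the same campaign results viewed two ways.
# Market/device rows and figures are illustrative assumptions.

rows = [
    {"market": "UK", "device": "mobile",  "spend": 300.0, "conversions": 10},
    {"market": "UK", "device": "desktop", "spend": 200.0, "conversions": 2},
    {"market": "DE", "device": "mobile",  "spend": 250.0, "conversions": 1},
    {"market": "DE", "device": "desktop", "spend": 250.0, "conversions": 9},
]

# Aggregate view: one blended CPA, no idea what to change next.
total_spend = sum(r["spend"] for r in rows)
total_conv = sum(r["conversions"] for r in rows)
print(f"Overall CPA: {total_spend / total_conv:.2f}")

# Granular view: CPA per market/device shows where performance comes from.
by_segment = defaultdict(lambda: [0.0, 0])
for r in rows:
    key = (r["market"], r["device"])
    by_segment[key][0] += r["spend"]
    by_segment[key][1] += r["conversions"]

for (market, device), (spend, conv) in sorted(by_segment.items()):
    print(f"{market}/{device}: CPA = {spend / conv:.2f}")
```

The blended number hides that one segment converts cheaply while another burns budget; the granular breakdown is what turns 'it didn't work' into a next step.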

Don't be put off by the unknown. Instead, tackle what you do know to influence the performance of a given algorithm.


© Copyright 2018 Greenlight. All Rights Reserved