By Jason Smith, Chief Digital Officer, Data & Commerce Practices at Publicis Groupe, and I-COM Global Member
As marketers, it’s no secret that our best-laid strategies and plans are subject to variability. Even beyond the kinds of variability we might expect – unforeseen consumer shifts, competitive disruption, the rigors of marketplace dynamics – there’s real noise in the more human factors. When a decision about the best tack or course is needed, these might include our professional environment at the time, our mood, or our beliefs and values. At a time when AI has captured our collective imagination – look no further than the chatter around ChatGPT – it’s worth noting its role in the muscular work of decision making and its capacity, quite simply, for cutting through noise.
So what does a more purposeful use of AI to reduce noise involve? It means using algorithms and AI to augment – and, in the right and reasonable instances, replace – human decision making.
A working committee at I-COM Global recently took a closer look at what some might consider a balancing act: operationalizing this practice and gauging when to leverage AI and when to tap the human factor. Reflecting on our work at I-COM as part of this year’s Data Science Journal, I’d like to share some of our findings here as food for thought within the broader industry conversation.
Marketers and advertisers tackle seemingly countless decisions across the end-to-end media process: discovering and gathering consumer insights, planning and buying, creative development, activation, testing, learning, and optimization, onward to measurement and analysis. Forecasting and prediction come with the territory. As we think about how best to operationalize the use of AI, and then balance it appropriately with the human factor, there are a number of things worth understanding first.
Considering Bias Before and After Data Enters the Picture
Despite the trend toward data-driven marketing, most marketers are not inherently strong at data analysis. But leave the data aside for a moment and consider just the act of making a decision. Whether we’re inclined to acknowledge it or not, decision making is more often than not based on emotion. This gives rise to noise in our decision-making system, no matter how ostensibly formal that system is. Add the fact that no human’s emotions are the same every day, and the issue is obvious: human decision making is already affected by emotional noise, and worsened by the fact that our emotions vary.
Further along in the process, once conclusions have been reached, there’s also the matter of post-rationalization. The brain constantly looks at what’s happened and tries to predict what comes next. Even if you introduce data at this point, there is the risk of confirmation bias. Bias is always at play within the human factor, and you’re only one member of the team. Imagine the following hypothetical example.
Picture a conversation between the head of marketing and the head of ecommerce at an up-and-coming CPG brand. There’s some debate over how people find the brand and begin the path to purchase: does it originate with organic search, social media, or a PPC ad? The back-and-forth over organic vs. paid media engagement involves sparring versions of, “Well, I just don’t believe people do that,” or, “I never do that. It’s not realistic to think that your average consumer does that.” Ultimately there is no data-driven decision making whatsoever. Gut reaction meets gut reaction, and a customer segment gets developed with no real analysis of what that segment looks like from a data perspective. Despite the pretense of a logical exchange, there is emotion at play – not for nothing, including the relationship dynamics between the two teammates – along with variability.
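A debate like this one could be settled with even a very simple analysis. As a hypothetical sketch – the event log, channel names, and field layout below are all invented for illustration – a first-touch view of how customers actually arrive might look like:

```python
from collections import Counter

# Hypothetical clickstream: (customer_id, timestamp, channel) touchpoints.
touchpoints = [
    ("c1", 1, "organic_search"), ("c1", 2, "ppc"),
    ("c2", 1, "social"),         ("c2", 3, "organic_search"),
    ("c3", 2, "ppc"),            ("c3", 5, "social"),
    ("c4", 1, "organic_search"),
]

def first_touch_counts(events):
    """Count which channel each customer touched first."""
    first = {}
    for customer, ts, channel in sorted(events, key=lambda e: e[1]):
        first.setdefault(customer, channel)  # keep the earliest touch only
    return Counter(first.values())

print(first_touch_counts(touchpoints))
# Counter({'organic_search': 2, 'social': 1, 'ppc': 1})
```

Twenty lines of counting won’t resolve attribution in full, but it moves the conversation from “I just don’t believe people do that” to what the data actually shows.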
The Difference Between Quantifiable Competitive and Market Factors and “Noise”
Within your competitive set, where you sit on the data-driven marketing continuum will be a differentiator. If you are not particularly data driven and one of your competitors is wise enough to use data to eliminate noise, the reality is they have the advantage. The example we gave in our chapter of the I-COM Data Science Journal was around selecting your audience. If you select your audience based on human decision making alone, you may well miss entire segments of your target audience. If your competitor instead uses an augmentation of humans and AI, allowing a more data-driven decision-making process, that potentially puts them ahead.
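To make the audience-selection point concrete, here is a minimal, hypothetical sketch – the purchase records and category names are invented – contrasting a gut-based segment rule with a simple data-driven grouping that surfaces a segment the rule misses:

```python
from collections import defaultdict

# Hypothetical purchase records: (customer_id, category) pairs.
purchases = [
    ("a", "skincare"), ("a", "skincare"),
    ("b", "haircare"), ("b", "haircare"), ("b", "skincare"),
    ("c", "supplements"), ("c", "supplements"),
    ("d", "supplements"), ("d", "supplements"), ("d", "skincare"),
]

# A gut-based rule: "our audience is anyone who buys skincare."
gut_segment = {c for c, cat in purchases if cat == "skincare"}

# A data-driven pass: group customers by their dominant category instead.
by_customer = defaultdict(list)
for customer, category in purchases:
    by_customer[customer].append(category)

segments = defaultdict(set)
for customer, cats in by_customer.items():
    dominant = max(set(cats), key=cats.count)  # most frequent category
    segments[dominant].add(customer)

# The data surfaces a supplements segment the gut rule ignored entirely.
print(dict(segments))
```

In practice this grouping step would be a clustering or propensity model rather than a frequency count, but the principle is the same: the segments come from observed behavior, not from what anyone in the room believes consumers do.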
AI Can Help, But It Starts with Human Self-Awareness
It’s worth noting that, unfortunately, most humans – even marketing decision makers – do not see the flaws in their own decision making at all. They’ll quite happily point to an AI and say, “Well, it’s not right, it’s not perfect,” and discard the analysis, while failing to acknowledge the imperfections in their own decision making. This never gets remedied, and the cycle continues.
As a data-driven marketer – who is also human, with human teams on the job – you have to recognize the inconsistencies and flaws in human decision making and then ask, “OK, how can I build AI and data into my decision-making process?” It’s critical to think about the process the business has for making decisions – quite literally, how decisions are made within the business in the first place – as you work to find the most productive cooperation between humans, AI, and data-driven marketing. Self-awareness and procedural cross-checks are useful here; you don’t have to be a neuroscientist to do the work. It starts with accepting that no matter how long you’ve been doing the job, and however unimaginable it may be that an algorithm has something to lend, a combination is the key.
To do this properly, you’ve got to start thinking about bringing your data together. Have you got the right data? What is the quality of that data? If there are gaps, how might you fill them? If you don’t start thinking about this – and it is an awful lot of work from an organizational perspective – you’re not going to get the best results out of the AI. At the moment, it is largely only as good as the data that’s been fed into it, so you will end up with something very poor if you don’t sort out those data issues. The key will be nurturing a management and decisioning culture that sees the potential of using AI to reduce noise and bias, and to ultimately achieve a stronger combined model.
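The “have you got the right data, and what is its quality” questions can start smaller than they sound. As a minimal sketch – the customer records, field names, and `None` gaps below are invented for illustration – a first-pass completeness check might look like:

```python
# Hypothetical customer records, with None marking missing values.
records = [
    {"id": 1, "email": "a@x.com", "age": 34,   "region": "EU"},
    {"id": 2, "email": None,      "age": None, "region": "NA"},
    {"id": 3, "email": "c@x.com", "age": 29,   "region": None},
    {"id": 4, "email": None,      "age": 41,   "region": "EU"},
]

def completeness_report(rows):
    """Share of non-missing values per field: a first-pass quality check."""
    fields = rows[0].keys()
    return {
        f: sum(r[f] is not None for r in rows) / len(rows)
        for f in fields
    }

report = completeness_report(records)
for field, share in sorted(report.items(), key=lambda kv: kv[1]):
    print(f"{field}: {share:.0%} complete")
```

A report like this doesn’t fix anything by itself, but it makes the gaps visible and gives the organization a concrete list of what must be sorted out before any AI built on that data can be trusted.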