The Marketing <> Analytics Intersect, by Avinash Kaushik
TMAI #495: Analyst 2028: S.H.I.F.T For Relevance.

[ Web Version ]
Today an Analyst spends a majority of their time doing, with only a little left for thinking.

Reports. Dashboards. Backwards looking drilldowns when a leader asks a question.

In January 2028, an Analyst will spend a majority of their time thinking, with all the doing managed by their swarm of agents.

Big. Change.

The inevitable result of the new 10/90 rule: If you have $100 to invest in smarter analytics, invest $10 in people and $90 in intelligence & automation. 

In fact, over time that needed investment will drop to $80, and then $70, and so on. I anticipate that both intelligence and automation will become exponentially better… while becoming cheaper!

Punchline: Our jobs today will change at an accelerating pace over the next 12 months, and cease to exist in their current form in 18–24 months.

That is scary when you realize that in the past 20 years the Analyst role has only changed slightly, in nearly all domains in nearly all companies.

Let’s. Get. Ready!

[URGENT Request: While written for the Analyst role, the thinking I’m applying below applies to your job. CEO. Marketer. HR Director. CRM Builder. Every job will undergo immense change. Some, eliminated. Please reflect, adapt, embrace, evolve.]

Let’s leave the data puking life behind. Let’s embrace this extraordinary opportunity to elevate our impact within an organization by solving previously intractable problems.   
The fallibility of human predictions.

It is said that humans overestimate change in the short-term, and underestimate it in the long-term. 

With urgency, I’m going to lay out a specific set of recommendations for what I see based on my best pragmatic assessment. There is a possibility I am overestimating change in the short-term (24 months) and underestimating change in the long-term (infinity).

Here’s the critical bit: It is a matter of when, not if.   
Analyst 2028: New Job: Decision Intelligence Owner.

As the world hurtles towards AGI*, the production of analysis is heading towards zero marginal cost. AI Agents will monitor every metric, detect every anomaly, and generate every report instantly. 

[* A type of AI that can perform any intellectual task that a human can do. It possesses the ability to learn and apply information across multiple, unrelated domains without task-specific reprogramming.]

For nearly all tactical decisions, reports won’t be necessary as actions will be automated across integrated systems. For nearly all strategic decisions, causal analysis, recommended actions, and predicted business impact will be produced automatically - for the Analyst to add missing context, unpack any ambiguity.

A helpful metaphor for an Analyst in Jan 2028: they are no longer a player on the field kicking the ball, they are the coach and the referee.

This job title captures the essence: Decision Intelligence Owner. [DIO]

To own the quality, safety, and business alignment of automated decisions.

Here’s a framework that captures the evolution, from today to Jan 2028: SHIFT.

Get it. Shift. 😊 

S. Strategy Translator.

AI will optimize for any metric you tell it to. [Remember the AI paperclip problem?]

You have to identify a North Star to ensure your company is not getting brilliant solutions to the wrong problems.

Jan 2028 Responsibility: Reward Function Architect.

Translate ambiguous, messy human business goals [ex: we need to look cool to Gen Alpha] into mathematical parameters that AI can execute on. Ex: “We need to be more premium” = Increase average order value by 15%, while maintaining NPS above 50, prioritizing customers with LTV over $5,000.

As the Reward Function Architect you are the bridge between a leader’s intent and the algorithm’s execution.
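To make this concrete, here is a toy Python sketch of a reward function for the “premium” goal above. Every name, weight, and threshold is an illustrative assumption, not a prescription:

```python
# A minimal sketch of a reward function for "be more premium."
# All names and thresholds here are illustrative assumptions.

def reward(avg_order_value, baseline_aov, nps, customer_ltv):
    """Score an outcome: grow AOV, but only while the guardrails hold."""
    # Hard guardrail: if NPS falls below 50, the reward is zero, so the
    # optimizer cannot trade brand health for short-term AOV gains.
    if nps < 50:
        return 0.0
    aov_lift = (avg_order_value - baseline_aov) / baseline_aov
    # Weight high-LTV customers more heavily (illustrative 2x bonus).
    ltv_weight = 2.0 if customer_ltv > 5000 else 1.0
    # Cap credit at the 15% target so the agent doesn't over-optimize AOV.
    return min(aov_lift, 0.15) * ltv_weight
```

The design choice that matters: the guardrails are hard constraints, not suggestions, because AI will optimize exactly what you reward.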

Actions Starting Today.

Start obsessing about business models that power your company. If you don’t understand business strategy, you will fail as a Reward Function Architect.

Learn how your company actually makes money – unit economics, margins, competitive pressures. Identify how Marketing’s actually contributing to financial success – balance sheet level. 

Do this exercise for your next dashboard. Write: “This dashboard will inform [decision] by helping us choose between [option A] and [option B].”
H. Human Dynamics Engineer.

For now, AI is literal. Dangerously so. Ex: It sees discount emails drive sales, so it recommends more discount emails. It won’t see the erosion of long-term brand equity that comes from training customers to wait for a sale.

Jan 2028 Responsibility: New Discoveries Driver.

Focus on identifying the business known unknowns for the AI, and what might be the unknown unknowns to the AI (but known to you, the human). 

Imagine radical experiments that historical data says shouldn’t work, but human intuition, present circumstances, elements on the upcoming horizon, say might. Ex: New customer intent clusters. New experience archetypes. Exploration budgets that protect brand and margin – thus short and long, fuzzy and concrete, things still potentially challenging for AI in early 2028.

My friend Matthew Tod brilliantly frames it as #NITS: Not in the training set. Things that machines don’t know. I love it. Business processes, culture, persuasion, governance, change being the only constant, etc. etc. etc. Embrace Matthew’s hashtag.

Actions Starting Today.

How comfortable are you with creativity and psychology? Read behavioral economics. Study design thinking. Playing off the sub-title of my second book, Analytics 2.0… The art of analytics will become more valuable than the science – because AI will own science.

[Recommendation: In TMAI #208, emphasizing the value of understanding human psychology, I’d recommended: Misbehaving by Richard Thaler, Influence by Robert Cialdini, The Undoing Project by Michael Lewis, and Fooled by Randomness by Nassim Taleb.]
I. Intelligence Orchestrator.

Collecting, cleaning, organizing data will remain of immense value, and stewarded by a dedicated technical data role. An Analyst will not be writing SQL queries or logging into GA or manually creating segments (see the last two TMAIs for why all this is already gone). 

An Analyst will manage a swarm (or, fleet if you like that better) of specialized AI Agents. One for monitoring inventory, another for monitoring UX, a third to optimize the Paid media campaign based on causal incremental impact.

Jan 2028 Responsibility: Agent Wrangler.

You manage autonomous or semi-autonomous AI agents (with immense influence on, if not total control of, the Reward Function – see ā€œSā€ above).

Your Media Buying Agent is spending hundreds of thousands (/millions) to acquire customers because it has found a monetizable high conversion intent. But, your Supply Chain Agent shows very low inventory. Connected together, they ensure the company’s money is not lit on fire. Done intelligently, informed by Agent 2, Agent 1 redirects intent identification to products in inventory, and collaborates with Agent 3 to acquire high LTV customers. BOOM! You did all that. Well. They did all that. You get the credit. 😊
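A toy sketch of that three-agent handshake, with entirely hypothetical signals and agent roles, might look like this in Python:

```python
# Illustrative only: Agent 1 (media buying) filters its targets using
# Agent 2's (supply chain) inventory, then ranks them with Agent 3's
# (LTV) scores. Real agents would exchange these signals over APIs.

def coordinate(intent_products, inventory, ltv_scores, min_stock=100):
    """Return spend targets: in-stock products, highest LTV first."""
    # Agent 2's signal: drop products we cannot actually fulfill.
    in_stock = [p for p in intent_products if inventory.get(p, 0) >= min_stock]
    # Agent 3's signal: prioritize products favored by high-LTV customers.
    return sorted(in_stock, key=lambda p: ltv_scores.get(p, 0), reverse=True)
```

The point is not the ten lines of code. It is that someone has to decide what the agents share, in what order, and what “low inventory” even means. That someone is you.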

Actions Starting Today.

Shift from learning tools to learning systems. Today, you likely focus narrowly on one of the three agentic behaviors. Best case, you optimize a silo. Tomorrow, you optimize portfolios (of three, in the simple scenario above). This is a hard, complex shift.

Study API architecture, data pipelines, and how different tech stack components integrate. Invest in learning prompt engineering for Agentic Workflows – how to chain prompts to execute complex tasks. Learn to map dependencies between systems, and the logic that powers optimal decisions. Create system conflict scenarios, and work through how they might get resolved (technically, and in business terms).

It is important to know all of the technical bits above so you can imagine possibilities. You won’t have to do all the technical work. Ex: Five years ago I had to know APIs, keys, columns, connections, and logic to build my complex workflows in Zapier. Today, I simply use Zapier’s copilot interface and tell it in English what I want done… and it does all the coding! But. Without knowledge of the technical bits, I can’t imagine all the possibilities at my disposal to achieve complex outcomes across disparate systems.
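If “chaining prompts” still feels abstract, here is a minimal, library-free Python sketch of the idea. `call_llm` is a stand-in for whichever model API you use, and the step prompts are made up:

```python
# Prompt chaining: each step's output becomes the context of the next.

def call_llm(prompt):
    # Placeholder: in practice, call your model provider's API here.
    return f"[model response to: {prompt}]"

def run_chain(question, step_templates):
    """Run prompt templates in order, threading context between them."""
    context = question
    for template in step_templates:
        context = call_llm(template.format(context=context))
    return context

steps = [
    "Extract the key metric from: {context}",
    "Propose one experiment to move: {context}",
]
```

The threading is the whole trick: step two reasons over step one’s answer, which is how a fuzzy question becomes an executable multi-step task.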
F. Forensic Auditor.

I’m sure you’ve noticed hallucinations have dropped to a precipitous new low in LLMs. But. They are still there. In complex systems, reward functions can get confused. We also don’t really know how “AI” works – and these systems will become even more opaque (perhaps reaching Vantablack status as far as we humans are concerned).

Jan 2028 Responsibility: Risk Guardian.

You do not create the report, you grade the report. You will assess the quality and quantity of outcomes. You will check for bias, violations of privacy laws, and compliance with the governance mechanisms the business, and/or the government, has put in place. Your job won’t be to check AI’s “math.” Your job is to ask: What would have to be true for this to be right? Then assess if that’s plausible (such a deeply complex and human task!).

Actions Starting Today.

Invest in learning about AI Ethics, data privacy, and governance principles and frameworks. Practice the “therefore test”: For every AI insight today, ask “Therefore we should _______.” If the action seems off (/crazy), the insight might be wrong. Identify predictions being made today in your team that failed. Ex: Forecasted revenue by month. Analyze why you got it wrong. Was it bad data? Were the assumptions wrong? What would have been helpful to have (data, logic, strategy, tactical details) that would have improved the quality of your prediction? You are preparing for your future role!
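The failed-forecast post-mortem suggested above can start as simply as this Python sketch (the 10% tolerance and the data shapes are my assumptions):

```python
# Grade last year's monthly revenue forecast against actuals, flagging
# months that missed badly enough to deserve a post-mortem.

def grade_forecast(forecast, actual, tolerance=0.10):
    """Return (month, relative_error) for every miss beyond tolerance."""
    misses = []
    for month, predicted in forecast.items():
        error = abs(predicted - actual[month]) / actual[month]
        if error > tolerance:
            misses.append((month, round(error, 2)))
    return misses
```

Then, for each flagged month, do the human part: was it bad data, or bad assumptions?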
T. Translational Storyteller.

Weird word, translational. Stay with me.

This is true today: Our brilliant insights are often worthless because key leaders don’t understand them, or they don’t trust them, or they don’t trust us, or the incentives motivating them are invisible to us.

It will be true in the future. AI can generate brilliant IAbI (which you and I struggle with today!). Your job will be to generate belief and action.

Jan 2028 Responsibility: Narrative Architect.

Insight: The gradient boosted model shows a 73% probability of churn for Segment gamma. Your job will be to convert it into a story: Our most loyal customers are getting frustrated. They are three times more likely to leave in the next month. Here’s the exact experience that’s breaking their trust, and here’s how we fix it before Tuesday. Belief. Action. See?

Actions Starting Today.

I think you get why most Analysts are not great storytellers. Time to evolve. Try simple things. Rewrite a technical finding as a tweet. 280 characters. Start with the punchline. The next time you are in front of your C-Suite, put the recommendation on slide one and the data in the appendix. Generate belief and action.

My storytelling framework for data is Care-Do-Impact. Build your own. A bunch of people use the one minute story format: Situation > Complication > Question > Answer > Action. Try that on for size. Change it to wrap around your unique strengths and weaknesses. 

This one’s big: Understand incentives. All humans respond to incentives, and nearly all the time you don’t know what they are. Understand them, and you’ll improve your ability to generate belief and action by 100x.
S. H. I. F. T.

Every single day after your first coffee/Red Bull at work… Pause. Clear your mind. Give yourself this prompt: Am I doing work that will be obviously better when done by AI in two years?

If yes. Stop. 

Work yourself out of that responsibility by documenting it, automating it.

You and I are not going to compete with machines on their turf. 

You and I are going to do what only humans can do: Understand nuance, ask surprising questions, tell compelling stories, and make ethical judgements.

The SHIFT framework is the prescription.

Start today.

Start by automating one report (and decision cycle, if you can). Use the time saved to have one strategic conversation.

Hello, Jan 2028.
Bottom line.

Over the next six months (July 2026), work towards “firing yourself” from your current job. Activate all the recommendations from the last two TMAIs.

Next, through Jan 2027, learn the business, not the tools – be a strategy apprentice.

Through July 2027, shift from using AI to managing AI.

From Aug 2027 to Jan 2028, complete your transformation by shaping strategy through algorithmic insight and human judgement.

More joy. More job satisfaction. More influence. More salary. 

Who’s afraid of AI? Not you!

Carpe diem.

-Avinash.

PS: There is so much more complexity and detail to unpack. In future TMAI Premium editions, I’ll continue to build on the SHIFT framework to contribute to guiding your career’s evolution. We will both grow together.  
Thank you for being a TMAI Premium subscriber - and helping raise money for charity.

Your Premium subscription covers one person. It's fine to forward occasionally. Please do not forward or pipe it into Slack for the whole company. We have group plans, just email me.

[Subscribe]  | [Web Version]  | [Unsubscribe]
©2022 ZQ Insights  |  PO Box 10193, San Jose, CA 95157, USA