Friends,
I'm T-minus 2 months away from a 5-week Euro adventure! And I’m using that deadline as motivation to close out my goals for the year.
Because let’s be real, I don’t think I’ll be getting much done between late November and Christmas. Plus there’s a friend’s wedding in the middle of it all, so realistically, 2025 ends for me (in a professional sense) on October 10th.
Oof. That’s not long away.
In other news, GPT-5 came out last week, and I’m not going to lie, it left me with conflicting emotions.
On the one hand, it came out while I was taking a break, immediately flaring my ‘always on’ AI-FOMO. But then came a breath of relief when I read wall-to-wall posts about how bad it is.
I guess we all get to keep our jobs a little longer then, which is kinda nice.
Which is a nice segue to today’s interview, on the role AI can play in compensation.
Enjoy,
Matt
Know a Head of People handling startup compensation? 🙋 Why not forward this to them for some instant karma?
Did someone forward this to you? Make sure to hit subscribe and save them the click next time!
Will AI Fix Compensation, or Just Make It Worse, Faster?
with Culture Amp’s former VP of People Ops, Aubrey Blanche
Aubrey Blanche is the founder of MathPath and former VP of People Ops at Culture Amp, where she led an industry‑leading effort to build pay equity and earned recognition for achieving one of the smallest gender pay gaps among peer tech firms.
With expertise at the intersection of people, ethics and responsible AI, Aubrey operationalises equity into business systems — transforming abstract values into measurable outcomes. Aubrey’s blunt, data‑driven approach underpinned Culture Amp’s claim: “We fucking won pay transparency”.
Here’s what we covered:
Make AI your always‑on pay‑equity analyst
Define “fair” in code: governance before models
Turn biased training data into errors to correct
Build AI literacy without losing human judgement
Use candidate‑facing bots to explain your comp philosophy
Personalise rewards while preserving auditability
Automate approvals and offers, not relationships
Prevent hallucinations with review and choice mechanisms
My Key Takeaways:
Responsible AI starts with a definition of “good,” not with a model.
Aubrey argues the biggest risk is delegating ethics to the algorithm. Before tools, set a responsible‑AI framework that defines acceptable outcomes and constraints for comp decisions, then align processes and approvals to it. “You need to intentionally build for good stuff if you want it to do good things in the world.”
AI can be a real‑time pay‑equity analyst.
Her “dream world” is a comp BI layer that flags gaps as they emerge: expected pay vs. actual, variance by cohort, and recommended fixes. Deterministic models do the measurement; generative tools propose causes and actions. The goal is pragmatic: “Hey, something’s not fair… I’m able to crunch the data and give you that in real time.”
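For the curious, here’s a minimal sketch of what that deterministic measurement layer might look like. The field names, the cohort grouping, and the 3% tolerance are my illustrative assumptions, not Aubrey’s spec:

```python
from statistics import median

def pay_gap_flags(employees, tolerance=0.03):
    """Flag cohorts whose median actual pay deviates from median
    expected (modelled) pay by more than `tolerance` (a fraction)."""
    cohorts = {}
    for e in employees:
        cohorts.setdefault(e["cohort"], []).append(e)
    flags = []
    for name, members in cohorts.items():
        actual = median(m["actual_pay"] for m in members)
        expected = median(m["expected_pay"] for m in members)
        gap = (actual - expected) / expected
        if abs(gap) > tolerance:
            flags.append({"cohort": name, "gap_pct": round(gap * 100, 1)})
    return flags

# Invented example data: expected_pay is what a fair model predicts.
staff = [
    {"cohort": "women", "actual_pay": 92_000,  "expected_pay": 100_000},
    {"cohort": "women", "actual_pay": 95_000,  "expected_pay": 100_000},
    {"cohort": "men",   "actual_pay": 101_000, "expected_pay": 100_000},
    {"cohort": "men",   "actual_pay": 99_000,  "expected_pay": 100_000},
]
print(pay_gap_flags(staff))  # women cohort flagged at -6.5%
```

In Aubrey’s framing, a generative layer would then sit on top of flags like these to propose causes and fixes.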
Datasets have agendas. Make yours explicit.
Quoting her mentor Kieran Snyder, “every data set has an agenda”. If historic data reflects bias, unguarded models will learn that bias as “truth.” Aubrey’s move: encode those patterns as problems, not targets. Treat unexplained pay gaps as errors to correct rather than trends to imitate; penalise status‑quo replication in model evaluation.
Machines optimise for patterns, not morals.
“Computers don’t really have morals… ‘correct’ simply means reflects the status quo.” Aubrey cites Amazon’s shelved résumé screener that learned to down‑rank women because men had been hired more often. The comp corollary: without guardrails, AI will reproduce legacy inequities, just with prettier charts.
Design for scepticism: review, optionality, and documented overrides.
Large models “hallucinate”; in Aubrey’s words, “the model made shit up”. Counter this by forcing human review before acceptance and giving decision‑makers the ability to edit outputs. That preserves critical thinking and, research shows, boosts adoption because people buy into choices they can change.
AI literacy is a core competency for People teams.
Don’t require analysts to hand‑compute regressions; do require them to understand what the model is doing and why. She channels the calculator debate: learn the logic, then use the tool. The irreplaceable human edge isn’t empathy alone, it’s judgement and curation of AI outputs in messy, high‑stakes contexts.
Bring your comp philosophy to life with candidate‑facing chat.
Aubrey imagines a RAG‑powered bot trained on your philosophy, market targets, and range rules. Candidates could ask: “Can I maximise salary here?” The bot replies with specifics — “we target P75 at midpoint; we’re unlikely to be top‑of‑market for this role; here’s how total reward stays competitive” — helping misaligned candidates opt out early and saving recruiter cycles.
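For flavour, here’s a toy version of that candidate-facing bot. A real build would be retrieval-augmented generation over your actual policy docs; naive keyword matching over hand-written snippets (all invented below) just stands in for retrieval:

```python
# Invented comp-philosophy snippets standing in for a document store.
KNOWLEDGE_BASE = {
    "salary": ("We target P75 at the range midpoint; we're unlikely "
               "to be top-of-market on base salary alone."),
    "equity": ("Equity grants are benchmarked against peer-stage "
               "startups and refreshed annually."),
    "negotiation": ("Offers are placed in-range by a levelling rubric, "
                    "not by negotiation skill."),
}

def answer(question: str) -> str:
    """Return the policy snippets whose topic keywords appear in the question."""
    q = question.lower()
    hits = [text for key, text in KNOWLEDGE_BASE.items() if key in q]
    if not hits:
        return "I don't have that in our comp philosophy; ask the recruiter."
    return " ".join(hits)

print(answer("Can I maximise salary here?"))
```

The payoff Aubrey describes is the same either way: misaligned candidates self-select out before a recruiter spends a cycle on them.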
Personalised total reward is feasible, and auditable.
Aubrey’s worked with firms letting employees choose the cash‑equity mix based on current vs. future value preferences. AI can model trade‑offs, maintain fairness constraints, and keep pristine records so someone swapping cash for equity doesn’t later double‑dip. Benefits can follow the same logic with a “wallet + menu” approach that maximises perceived value per dollar.
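One way the “pristine records” piece could work, sketched with an invented 60% cash floor and a simplified 1:1 cash-to-equity conversion:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

MIN_CASH_FRACTION = 0.6  # hard-coded fairness floor (illustrative)
CONVERSION_RATE = 1.0    # $1 cash -> $1 notional equity (simplified)

@dataclass
class RewardElection:
    employee_id: str
    total_value: float
    cash_fraction: float
    audit_log: list = field(default_factory=list)

    def elect(self, cash_fraction: float) -> None:
        """Record a cash/equity split, rejecting anything under the floor."""
        if not MIN_CASH_FRACTION <= cash_fraction <= 1.0:
            raise ValueError("election violates fairness floor")
        self.cash_fraction = cash_fraction
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "cash": round(self.total_value * cash_fraction, 2),
            "equity": round(self.total_value * (1 - cash_fraction)
                            * CONVERSION_RATE, 2),
        })

e = RewardElection("emp-42", total_value=150_000, cash_fraction=1.0)
e.elect(0.7)  # swap 30% of cash for equity; the trade is logged
print(e.audit_log[-1]["cash"], e.audit_log[-1]["equity"])  # 105000.0 45000.0
```

The log is what stops the double-dip: anyone later claiming the cash they traded away has a timestamped election on record saying otherwise.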
Automate rote tasks; elevate human conversations.
Expect AI to wipe out spreadsheet‑shuffling: offer prep, approvals, and policy checks done in “a couple of clicks”. The jobs don’t vanish; the job design does. The most competitive practitioners will pair high EQ with data fluency to steer strategy, socialise trade‑offs, and hold the line on fairness.
Beware AI used as a low‑ball engine.
Aubrey is blunt that the same tooling can be pointed the wrong way, and that “calculate the lowest amount they’ll accept” is just a prompt away. Counter this by hard‑coding floor rules, documenting consent when employees pick trade‑offs, and routinely auditing outcomes by cohort so efficiency never trumps equity.
Where to Find Aubrey
LinkedIn: Aubrey Blanche
Got a specific topic you want me to cover or a guest you’d love to nominate? Hit reply to this email and let me know.
That’s all for this week.
Sure, this is technically the end of the newsletter, but we don’t have to end here! I’d love this to be a two-way chat, so let me know what you found helpful, any successes you’re seeing, or any questions you have for me.
Sharing is caring. If you are enjoying our newsletter, others might too — forward this on to them and get their endless appreciation.



