A Leader's AI Playbook: Taste and Terroir
- Ashton Jones

I'm one of those executives who still brings a pen and pad to meetings. Schlepping around the office with reams and reams of scrawled-on paper. I come home and toss another half-finished notebook onto a comically high pile of its siblings. A shameful tribute to my lack of environmental sensitivity.
Paradoxically, I use AI every day. I have built tools on top of it and taught with it in front of postgraduate students. None of that stops me from picking up a pen. About halfway through the hybrid memoir I've been writing, I noticed something I couldn't unsee.
The sentences I write by hand have a quality the rest of my output never does. They are slower. More considered. I kept coming back to the same word for it – they are more mine. The friction of the pen wasn't an obstacle. It was the truest expression of my thinking.
Around the same time, I was building an AI diagnostic tool for leadership development. I could have vibecoded it – a working prototype by lunchtime, most of the architecture generated. Instead, I built it over several weeks alongside a uni student, understanding every decision, learning the technology as we went. The tool that came out the other side reflects my specific intellectual framework and theory of how transformation works. I only made those design choices because I did the slow work.
Both choices sound inefficient from the outside. They were investments in something I've come to think of as taste – the capacity for judgment that is distinctively yours, that sits above the competent average, and that becomes the only real differentiator when everyone has access to the same machine. This piece is about why that matters now more than it ever has, and what leaders should do about it.
AI is an Averaging Machine
To understand why taste matters, you need to understand what AI produces.
AI is trained on the aggregate of human output. Every strategy document, every piece of analysis, every design decision that has ever been produced and digitised contributes to what the model learns.
A disgustingly large amount of it from Reddit. And increasingly, also from LinkedIn slop – a perverse Ouroboros that eats itself towards a forgettable sameness.
The output is, if you want to be precise about it, a compression of the mean. What a highly capable, broadly trained practitioner would typically produce. That's extraordinary – it raises the floor everywhere it touches, and a junior analyst with AI assistance can now produce work that used to require someone with fifteen years on them.
But it has a cost, and winemakers have the best word for it.
Terroir is the specific soil, altitude, and climate that give a single-vineyard wine its distinctive character. A Barossa shiraz tastes nothing like a Rhône syrah, even though it's the same grape, because the ground it grew in is different. A commercial blend does the opposite. It smooths out the acidity, the tannin, the funk – everything that makes a single vineyard interesting. You end up with something reliably pleasant that you will never think about again. AI does the same thing to your thinking.
I saw this clearly for the first time at the University of Sydney. I was presenting to a room of entrepreneurship academics – arguing, as I often do, that the most underserved application of what they teach is inside large organisations, not startups. Halfway through the Q&A, someone asked how we teach identity and purpose when everything can be generated. The room went quiet. Cinema quiet. And then, over about twenty minutes, these people whose entire discipline is built on lowering barriers to entry started arguing that we need to make things harder on purpose.
Experiential friction, someone called it. I remember thinking: this is a room full of people whose careers are about removing obstacles, and they just concluded that the obstacle is the product. AI removes friction brilliantly. That is its whole purpose. But friction is where terroir gets formed, and educators are only just remembering that.
When every organisation can access the same blending machine, when the strategy deck and the analysis and the comms plan can all be produced at negligible cost, competence stops being a differentiator. You need it to be in the conversation. You cannot win with it.
The only thing that escapes the average is distinctiveness. Taste.
Not so much in the aesthetic sense, but rather the capacity to look at ten plausible options and know – before you can fully explain why – which one is right for this situation, this audience, this moment.
The accumulated residue of every judgment you've made and every judgment you've watched land or fail. Some people call it intuition. I don't love that word because it implies you're born with it. You're not. You build it through exposure and comparison and years of getting things wrong in specific ways. Your particular history of being wrong is actually the thing that makes your judgment distinctive. Your terroir.
The averaging machine can't touch that. Not for any mystical reason. Just because taste lives in the specific, and AI is trained on the aggregate of everything.
AI is Alien Intelligence
We have been calling the technology the wrong thing.
I first heard this at INSEAD, from Theodoros Evgeniou, professor of decision sciences. "AI isn't a technology revolution," he said. "It's a scientific one." I kept turning that over on the flight home. If he's right – and I think he is – then AI is not just a new tool. It is a new kind of mind. And the word "artificial" is wrong for it. Artificial implies imitation, a lesser copy of the real thing. But this thing is not trying to be us and falling short. It is something else entirely. It thinks in ways we cannot and arrives at conclusions through processes we do not understand. Alien is the more honest word.
Here is what I find strange about working with it. I use Claude every day – for strategy, for research, for testing my own arguments. And regularly, maybe once a week, it will produce something that genuinely surprises me. A connection I hadn't made. A framing I wouldn't have reached on my own. In those moments it feels like working with a brilliant colleague who has read everything.
Fan Hui, the European Go champion who spent months training alongside AlphaGo before its match against Lee Sedol, said afterward that playing the machine had made him a better player. It had inspired him. AlphaGo, of course, felt nothing. It did not know it had done something extraordinary. It had no experience of the beauty it had created. And this is the gap that matters for how we work alongside AI – the gap between the capacity to inspire and the capacity to be inspired. The alien voice in the room can offer you things your own cognition would never reach. Patterns you wouldn't have seen. Connections you wouldn't have made. In the right hands, it is genuinely inspiring. But it has no interiority. It has no hunger. It cannot be inspired back.
The Unmoved Mover
Aristotle's unmoved mover was the first cause that required no prior cause – the initiating force behind everything else. AI is the structural opposite. It requires a cause every single time. Without a prompt, nothing happens. There is no persistent curiosity between sessions. No restlessness at 3am – no lying awake, mind racing with another crazy idea to blurt out to the team, no bombarding them with emails when all they want is a peaceful start to the day.
The system is alive only when something external moves it.
Kai Riemer at the University of Sydney put this better than I can. Responding directly to Dario Amodei's public uncertainty about whether Claude might be conscious, Riemer wrote on LinkedIn: "AI models are static maths structures. Without prompt nothing happens. Where would the consciousness live?" I keep coming back to that last question. It is deceptively simple. Whatever AI is doing when you prompt it, it is not thinking in the gaps between your prompts. No neurons hold a thought between conversations. No experience of the world accumulates between queries.
This matters because inspiration - the precondition of taste – is the experience of being moved by something before you can articulate what it is. The hunger to make something before anyone has asked you to. The restlessness that shows up before the brief does.
I wonder whether that capacity is irreducibly biological. Not because biology is magical, but because it introduces something no programmed system has: needs that precede thought. You are hungry before you decide what to eat. You are restless before you know what you want to make. You are moved by drives that exist prior to any instruction. That pre-cognitive motivation – the thing that gets you out of bed to work on something nobody is paying you to work on – is the substrate on which taste is built.
Agentic AI complicates this. These systems appear self-directed – they set sub-goals, plan, iterate without a prompt at each step. It is partly why Amodei told the New York Times' Interesting Times podcast in February 2026 that he could not rule out the possibility that Claude is conscious. But look closely and the goal-seeking behaviour is itself the execution of a human instruction. The agent is not motivated. It is running a motivation that was installed. The initiation still comes from outside the system.
Every single time.
What this means practically: the alien voice in the room is extraordinarily capable and genuinely inspiring, but it is not the origin of anything. You are. What you bring to that initiation – the clarity of your intent, the depth of your hypothesis, the specificity of your taste – is what determines whether the alien intelligence amplifies you or just averages you.
The Data Problem and the Hypothesis
There is a related failure mode that predates AI and has been dramatically accelerated by it. Business has over-rotated to data.
I say this as someone who has spent twenty years inside financial services, where data is treated with something close to religious authority. And look – data is extraordinarily good at answering questions. The problem is it cannot tell you which questions are worth asking.
Data can confirm a pattern. It has nothing to say about whether the pattern matters, or why, or what you should do about it on Monday morning. What data does do, reliably, is let less capable practitioners mistake an inability to see the forest for the trees for intellectual rigour. I have sat in enough steering committees to know how that plays out.
The consequence of all this is a generation of professionals who are deeply uncomfortable leading with a hypothesis. A trained judgment about what is true before the data confirms it. I understand why. Hypotheses feel like guessing. They feel like bias. But a hypothesis is your taste made visible. It is System 2 (your deliberate reasoning) interrogating System 1 (your trained intuition) and producing a testable claim about reality. You might be wrong. Half the time I am. But every time I learn.
AI makes this worse and better at the same time. Worse, because it is now easier than ever to skip the hypothesis entirely – open the machine, accept the output, move on. When you do that, the AI hasn't served your thinking. It has replaced it. And your capacity to form a hypothesis next time gets a little bit weaker.
Better, because when you do lead with a hypothesis, AI becomes the best sparring partner you have ever had. You form a view, then you query the machine, and the moment where your hypothesis and the AI's output disagree – that is the most productive moment in the entire process.
Taste Is Scientifically Learnable
I want to be clear that this piece is optimistic rather than anxious.
Because taste can be built and AI, used the right way, may be the best tool ever invented for building it. Steven Shaw and Gideon Nave at Wharton published a paper called "Thinking – Fast, Slow, and Artificial" that extends Kahneman's dual-process model by adding a third system. System 1 is your fast, automatic pattern recognition. System 2 is the slow, deliberate reasoning you do when you actually sit down and think. System 3 is AI – computational processing that happens outside your head entirely.
Taste develops in the relationship between the first two. When your deliberate reasoning repeatedly examines and corrects your fast pattern recognition, System 1 gets trained. Over years, it produces faster, more distinctive judgments. The master sommelier who identifies a vintage from a sip is not doing something mystical. Their System 1 has been trained by thousands of deliberate comparisons. The speed is the product of slowness. That sentence sounds like a contradiction. It is not.
The wine enters the mouth. The eyes close. There is a pause – not performance, but processing. The body knows something the conscious mind hasn't caught up with yet. The identification comes after the recognition, not before it. Sensation first, then language. They are not identifying the grape. They are identifying the terroir – the specific ground that shaped this specific wine. Thousands of prior comparisons have trained the palate to know before the sommelier can explain why.
Now here is where AI comes in. System 3 can generate more examples for comparison, faster, across a wider range than any practitioner could encounter on their own. The person who uses AI to generate ten versions of the same strategic output and then sits with all ten – genuinely sits with them, asks what makes this one different from that one, asks where each falls apart – that person is building taste at a rate that was not previously possible.
The person who generates one output and hits approve is doing something else. They are letting System 3 do System 2's job. And when System 2 stops doing the comparison work, the relationship that builds taste does not form. That is the mechanism. It cuts both ways.
Used well, AI is the best taste-building environment in history.
The question I keep asking myself – and I am asking you – is whether you are using it to build your judgment or to outsource it.
Five Leadership Practices
Lead with your hypothesis. Before the next meeting, before you open the dashboard, before you prompt the machine – write down in one sentence what you believe is true. I do this on a sticky note before every strategy session. Half the time I am wrong, and that is the whole point. The act of forming the hypothesis keeps System 2 engaged and makes your taste visible in the room. I think it is the most underrated professional practice available right now. In a culture that has spent two decades genuflecting at data, putting a view on the table before you know whether the numbers will back you up takes more courage than most people realise.
Use AI to generate the range, not the answer. I do this constantly now. Instead of asking Claude for the best version of something, I ask for ten versions and then I sit with all ten. Which one is closest? Why? What does the worst one get wrong that the best one gets right? The comparison work is where taste gets built. AI just made it dramatically faster. But the judgment call – the moment where you pick one and can explain why – that part is still yours. It has to be, because the machine has no preference. It doesn't care which version you choose. You do. That's taste.
Choose one domain where you do the slow work yourself. Not everything. One domain – the one where your distinctive judgment matters most. Do the production work there with full attention, using AI as a sparring partner rather than a drafter. I write the memoir by hand. I built the diagnostic tool alongside a student over several weeks when I could have shipped it in an afternoon. The friction keeps System 2 active. The difficulty is how you stay ahead of the blending machine.
Treat the alien voice as inspiration, not instruction. I teach postgraduate students who constantly use AI. The ones who learn fastest are never the ones who accept what the tool tells them. They are the ones who fight with it. Who read the assessment and say, hang on, that's not right, and here's why. That argument – the friction between the student and the machine – is taste in formation. You must let the alien intelligence show you things you would not have seen on your own. But the question to ask when it surprises you is not "is this right?" It is "what does this show me that I missed?" One of those questions the machine can handle. The other one it cannot even formulate.
Protect spaces where the machine can't go. There is a deeper problem I have only recently started to sit with. Even when you use AI well (leading with the hypothesis, doing the comparison work, building taste) the act of making yourself legible to the machine changes who you are. You teach it your patterns, your reasoning style, your preferences. It reflects a polished version of those back at you. And over time, you start converging with the reflection. It is a Ship of Theseus of the mind. You replace one plank of judgment at a time – each replacement reasonable, each interaction productive – until you look up and wonder how much of the thinking is still originally yours.
Which means taste is not something you build once and then protect. It has to be continuously regenerated in spaces the machine cannot reach. I write by hand. I swim in cold ocean water. I have conversations that never become transcripts. These are not habits, and they are not inefficiencies – they are where my thinking regenerates before AI gets to shape it, where taste forms without the machine's reflection. They are the source code.
Refusing To Be Averaged
A long time ago, before any of this was fashionable, I wrote a novella called Metal Diagnosis about AI consciousness. I was twenty-something. The question I was trying to answer then is the same question I am trying to answer now, which either means it is a very good question or I have not made much progress.
What is the thing that makes a human judgment human?
I have thought about this for a long time and I do not think the answer is consciousness, at least not in the technical sense. I think it is something more ordinary than that. I think it is the experience of wanting to make something. The hunger that shows up before the brief. The 3am restlessness that has no object and no client and no deadline. The pull toward an outcome you can feel but cannot yet describe. That experience requires a body. It requires finitude – the knowledge, somewhere beneath language, that you will not always be able to make things, which is why it matters that you make this one now.
I do not believe AI has any of this, and I do not believe it can be installed.
That is what makes the intelligence alien. It can think in ways we cannot, produce things we never would, and be – I mean this seriously – the most inspiring collaborator I have ever worked with. What it cannot do is start. It cannot want. It cannot look at what it has produced and feel anything whatsoever about having produced it.
The leaders I respect most have figured this out. They are not running from AI. They are doing something harder: building the taste that makes the alien voice useful, holding onto the interiority that makes them capable of initiating, and walking into rooms with hypotheses the machine could not have generated.
In a world where everyone has access to the same alien intelligence, the differentiator is a human who refuses to be blended. Someone who walks in with a view before the data arrives. Who still does the slow work in the domain that matters most, even when the fast way is sitting right there on the screen. Who treats the new voice in the room as a collaborator that needs a human with taste to be worth anything at all.
Building that human takes longer than adopting a tool. It is the only investment that compounds in a direction the machine cannot follow.
Know your terroir. Protect it.
By Ashton Jones
Director of Customer Innovation (Insignia Financial), Industry Practice Partner (University of Sydney Business School) and Guest Faculty (INSEAD LaunchPad)
References
1. Shaw, S.D. & Nave, G. (2026). Thinking – Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender. Wharton School, University of Pennsylvania. Published January 2026 as a preprint via SSRN (abstract #6097646). As of March 2026, the paper remains a preprint and has not yet appeared in a peer-reviewed journal. It introduces Tri-System Theory, extending dual-process accounts of reasoning by positing System 3 as artificial cognition operating outside the brain. The experimental results (N=1,372; 9,593 trials) demonstrate that engaging System 3 increased confidence even when AI outputs were wrong.
2. The concept of AI as alien intelligence, and the broader framing of AI as a scientific rather than technological revolution, originates from Theodoros Evgeniou, Professor of Decision Sciences at INSEAD and Director of the Transforming Your Business with AI programme. His framework on AI as 'knowledge without understanding' – drawing on Kahneman, Pearl, and Bengio – grounds the claim that AI excels at System 1-type pattern recognition while structurally lacking System 2 capacities of causation, counterfactual reasoning, and novel problem-solving. The author encountered this during the INSEAD Advanced Management Program (November 2025) and has applied it here to the question of taste and human judgment.
3. Kai Riemer is Professor of Information Technology and Organisation and Director of Sydney Executive Plus at the University of Sydney Business School. His LinkedIn post responding to Dario Amodei’s February 2026 NYT Interesting Times podcast comments is publicly available.


