
1
00:00:00,000 –> 00:00:03,400
Most organizations think their AI strategy is about adoption.
2
00:00:03,400 –> 00:00:06,080
Licenses, prompts, maybe a champion program.
3
00:00:06,080 –> 00:00:06,880
They are wrong.
4
00:00:06,880 –> 00:00:08,960
The failure mode is simpler and uglier.
5
00:00:08,960 –> 00:00:11,560
They’re outsourcing judgment to a probabilistic system
6
00:00:11,560 –> 00:00:13,440
and calling it productivity.
7
00:00:13,440 –> 00:00:15,000
Copilot isn’t a faster spreadsheet.
8
00:00:15,000 –> 00:00:17,120
It isn’t deterministic software you control
9
00:00:17,120 –> 00:00:18,520
with inputs and outputs.
10
00:00:18,520 –> 00:00:20,920
It’s a cognition engine that produces plausible text,
11
00:00:20,920 –> 00:00:23,760
plausible plans, plausible answers at scale.
12
00:00:23,760 –> 00:00:26,320
This episode is about the mindset shift,
13
00:00:26,320 –> 00:00:29,360
from tool adoption to cognitive collaboration.
14
00:00:29,360 –> 00:00:31,080
And the open loop is this.
15
00:00:31,080 –> 00:00:35,120
Outsourced judgment scales confusion faster than capability.
16
00:00:35,120 –> 00:00:37,720
Tool metaphors fail because they assume determinism.
17
00:00:37,720 –> 00:00:39,480
Tool metaphors are comforting because they
18
00:00:39,480 –> 00:00:42,040
preserve the old contract between you and the system.
19
00:00:42,040 –> 00:00:42,760
You do a thing.
20
00:00:42,760 –> 00:00:43,640
The tool does the thing.
21
00:00:43,640 –> 00:00:45,440
If it fails, you can point to the broken part.
22
00:00:45,440 –> 00:00:47,920
That contract holds for most of the last 40 years
23
00:00:47,920 –> 00:00:50,800
of enterprise software because those systems are legible.
24
00:00:50,800 –> 00:00:53,000
Repeatable behavior, explainable failures,
25
00:00:53,000 –> 00:00:55,880
and accountability that stays attached to a human.
26
00:00:55,880 –> 00:00:59,320
Spreadsheets execute arithmetic, search retrieves references,
27
00:00:59,320 –> 00:01:01,240
automation enforces bounded rules.
28
00:01:01,240 –> 00:01:04,040
Even when they fail, they fail in enumerated ways
29
00:01:04,040 –> 00:01:05,920
you can trace, audit, and govern.
30
00:01:05,920 –> 00:01:09,040
Now enter Copilot and watch every one of those metaphors collapse.
31
00:01:09,040 –> 00:01:10,680
Copilot is not deterministic.
32
00:01:10,680 –> 00:01:13,960
It is not a formula engine, a retrieval system, or a rules engine.
33
00:01:13,960 –> 00:01:15,360
It is probabilistic synthesis.
34
00:01:15,360 –> 00:01:17,800
It takes context, partial signals, and patterns
35
00:01:17,800 –> 00:01:19,760
and produces output that looks like an answer.
36
00:01:19,760 –> 00:01:21,120
The output is often useful.
37
00:01:21,120 –> 00:01:21,960
Sometimes it’s wrong.
38
00:01:21,960 –> 00:01:24,200
Often it’s unprovable without doing actual work.
39
00:01:24,200 –> 00:01:26,360
And it is presented in the shape of certainty.
40
00:01:26,360 –> 00:01:28,080
Fluent, structured, confident.
41
00:01:28,080 –> 00:01:30,240
That answer-shaped text is the trap.
42
00:01:30,240 –> 00:01:32,480
Humans are trained to treat coherent language
43
00:01:32,480 –> 00:01:34,200
as evidence of understanding.
44
00:01:34,200 –> 00:01:35,680
But coherence is not correctness.
45
00:01:35,680 –> 00:01:36,680
Fluency is not truth.
46
00:01:36,680 –> 00:01:38,160
Confidence is not ownership.
47
00:01:38,160 –> 00:01:41,720
Copilot can generate a plan that sounds like your organization,
48
00:01:41,720 –> 00:01:44,960
even when it has no idea what your organization will accept,
49
00:01:44,960 –> 00:01:46,840
tolerate, or legally survive.
50
00:01:46,840 –> 00:01:50,240
So when leaders apply tool-era controls to a cognition system,
51
00:01:50,240 –> 00:01:51,360
they don’t get control.
52
00:01:51,360 –> 00:01:52,880
They get conditional chaos.
53
00:01:52,880 –> 00:01:56,040
They create prompt libraries to compensate for unclear strategy.
54
00:01:56,040 –> 00:01:58,360
They add guardrails that assume output is a thing
55
00:01:58,360 –> 00:02:00,000
they can constrain like a macro.
56
00:02:00,000 –> 00:02:03,320
They measure adoption like it’s a new email client rollout.
57
00:02:03,320 –> 00:02:05,800
They treat hallucination as the primary risk,
58
00:02:05,800 –> 00:02:06,920
and they miss the real risk.
59
00:02:06,920 –> 00:02:09,240
The organization starts accepting plausible output
60
00:02:09,240 –> 00:02:10,480
as a substitute for judgment.
61
00:02:10,480 –> 00:02:13,760
Because once the output is in the memo, it becomes the plan.
62
00:02:13,760 –> 00:02:16,800
Once it’s in the policy response, it becomes the answer.
63
00:02:16,800 –> 00:02:20,200
Once it’s in the incident summary, it becomes what happened.
64
00:02:20,200 –> 00:02:22,160
And now the organization has scaled an artifact
65
00:02:22,160 –> 00:02:24,240
without scaling the responsibility behind it.
66
00:02:24,240 –> 00:02:26,240
This is the foundational misunderstanding.
67
00:02:26,240 –> 00:02:28,520
With deterministic tools, the human decides,
68
00:02:28,520 –> 00:02:29,960
and the tool executes.
69
00:02:29,960 –> 00:02:31,880
With cognitive systems, the tool proposes,
70
00:02:31,880 –> 00:02:33,000
and the human must decide.
71
00:02:33,000 –> 00:02:35,320
If you don’t invert that relationship explicitly,
72
00:02:35,320 –> 00:02:38,200
Copilot becomes the silent decision-maker by default.
73
00:02:38,200 –> 00:02:41,080
And default decisions are how enterprises accumulate security
74
00:02:41,080 –> 00:02:43,120
debt, compliance debt, and reputational debt.
75
00:02:43,120 –> 00:02:46,000
So before talking about prompts, governance, or enablement,
76
00:02:46,000 –> 00:02:49,080
the first job is to define what the system actually is.
77
00:02:49,080 –> 00:02:50,920
Because if you keep calling cognition a tool,
78
00:02:50,920 –> 00:02:52,440
you’ll keep managing it like one,
79
00:02:52,440 –> 00:02:54,440
and you will keep outsourcing judgment
80
00:02:54,440 –> 00:02:58,280
until the organization forgets who is responsible for outcomes.
81
00:02:58,280 –> 00:03:01,080
Define cognitive collaboration without romance.
82
00:03:01,080 –> 00:03:03,600
Cognitive collaboration deserves the least exciting definition
83
00:03:03,600 –> 00:03:05,400
you can give it, which is why it’s useful.
84
00:03:05,400 –> 00:03:08,120
Here it is: the AI generates candidate thinking
85
00:03:08,120 –> 00:03:09,880
and the human supplies meaning, constraints,
86
00:03:09,880 –> 00:03:11,080
and final responsibility.
87
00:03:11,080 –> 00:03:12,760
The machine expands the option space,
88
00:03:12,760 –> 00:03:15,240
the human collapses it into a decision.
89
00:03:15,240 –> 00:03:17,560
That distinction matters because most organizations
90
00:03:17,560 –> 00:03:20,120
are trying to get the machine to collapse the option space
91
00:03:20,120 –> 00:03:20,640
for them.
92
00:03:20,640 –> 00:03:22,160
They want the answer.
93
00:03:22,160 –> 00:03:24,480
They want a single clean output they can forward,
94
00:03:24,480 –> 00:03:25,800
approve, and move on from.
95
00:03:25,800 –> 00:03:29,080
That is the old tool contract trying to survive in a new system.
96
00:03:29,080 –> 00:03:31,160
In architectural terms, cognitive collaboration
97
00:03:31,160 –> 00:03:34,040
treats Copilot as a distributed idea generator.
98
00:03:34,040 –> 00:03:36,160
It proposes drafts, interpretations, summaries,
99
00:03:36,160 –> 00:03:38,720
and plans based on whatever context you allow it to see.
100
00:03:38,720 –> 00:03:40,240
Sometimes that context is strong.
101
00:03:40,240 –> 00:03:41,200
Often it’s weak.
102
00:03:41,200 –> 00:03:43,400
Either way, the output is not truth.
103
00:03:43,400 –> 00:03:44,960
It is a structured possibility.
104
00:03:44,960 –> 00:03:47,080
The human role is not to admire the output.
105
00:03:47,080 –> 00:03:48,120
It is to arbitrate.
106
00:03:48,120 –> 00:03:49,760
Arbitration sounds like a small word,
107
00:03:49,760 –> 00:03:51,440
but it’s a high friction job.
108
00:03:51,440 –> 00:03:53,560
Choosing what matters, rejecting what doesn’t,
109
00:03:53,560 –> 00:03:55,920
naming trade-offs and accepting consequences.
110
00:03:55,920 –> 00:03:57,080
That is judgment.
111
00:03:57,080 –> 00:03:59,080
And if you don’t make that explicit,
112
00:03:59,080 –> 00:04:02,280
you end up with a system that produces infinite plausible
113
00:04:02,280 –> 00:04:04,440
options and nobody willing to own the selection.
114
00:04:04,440 –> 00:04:07,080
So cognitive collaboration has four human responsibilities
115
00:04:07,080 –> 00:04:09,600
that do not disappear just because the prose looks finished.
116
00:04:09,600 –> 00:04:10,640
First, intent.
117
00:04:10,640 –> 00:04:12,080
The AI cannot invent your intent.
118
00:04:12,080 –> 00:04:12,920
It can mimic one.
119
00:04:12,920 –> 00:04:14,000
It can infer one.
120
00:04:14,000 –> 00:04:16,400
It can generate something that sounds like intent.
121
00:04:16,400 –> 00:04:18,680
But unless you explicitly name what you are trying
122
00:04:18,680 –> 00:04:20,480
to accomplish and what you refuse to do,
123
00:04:20,480 –> 00:04:22,880
the output will drift toward generic correctness
124
00:04:22,880 –> 00:04:24,840
instead of your actual priorities.
125
00:04:24,840 –> 00:04:26,760
Second, framing.
126
00:04:26,760 –> 00:04:29,160
Framing is where you declare the problem boundary.
127
00:04:29,160 –> 00:04:31,080
What question are we answering for which audience
128
00:04:31,080 –> 00:04:33,800
under which constraints with what definition of success?
129
00:04:33,800 –> 00:04:36,560
Most people skip this because they think it’s overhead,
130
00:04:36,560 –> 00:04:39,560
then they wonder why the AI produces nice content
131
00:04:39,560 –> 00:04:42,240
that doesn’t survive contact with reality.
132
00:04:42,240 –> 00:04:43,280
Third, veto power.
133
00:04:43,280 –> 00:04:45,240
This is where most programs fail quietly.
134
00:04:45,240 –> 00:04:47,800
They train people to get outputs, not to reject them.
135
00:04:47,800 –> 00:04:50,160
A veto is not an emotional reaction.
136
00:04:50,160 –> 00:04:51,560
It is a disciplined refusal.
137
00:04:51,560 –> 00:04:53,000
This claim is unsupported.
138
00:04:53,000 –> 00:04:54,240
This risk is unowned.
139
00:04:54,240 –> 00:04:56,120
This recommendation violates policy.
140
00:04:56,120 –> 00:04:57,840
This tone creates legal exposure.
141
00:04:57,840 –> 00:04:59,280
This conclusion is premature.
142
00:04:59,280 –> 00:05:02,280
If you cannot veto AI output, you are not collaborating.
143
00:05:02,280 –> 00:05:03,440
You are complying.
144
00:05:03,440 –> 00:05:05,280
Fourth, escalation.
145
00:05:05,280 –> 00:05:07,640
When the output touches a high impact decision,
146
00:05:07,640 –> 00:05:09,800
the collaboration must force a human checkpoint,
147
00:05:09,800 –> 00:05:10,960
not a polite suggestion.
148
00:05:10,960 –> 00:05:12,280
A structural checkpoint.
149
00:05:12,280 –> 00:05:14,840
If a system can generate a plausible interpretation
150
00:05:14,840 –> 00:05:17,400
of a security incident or a plausible policy answer
151
00:05:17,400 –> 00:05:19,080
or a plausible change plan,
152
00:05:19,080 –> 00:05:21,160
you need an escalation path that says,
153
00:05:21,160 –> 00:05:23,640
this is the moment where a human must decide
154
00:05:23,640 –> 00:05:25,760
and a workflow must record who did.
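A minimal sketch of what that checkpoint could look like in code, assuming a simple in-house workflow layer. Every name here (DecisionRecord, require_human_decision, the HIGH_IMPACT categories) is illustrative, not a real product API:

    # Illustrative sketch: a structural checkpoint that refuses to pass
    # high-impact AI output without a named human decider on record.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    HIGH_IMPACT = {"security_incident", "policy_answer", "change_plan"}  # assumed categories

    @dataclass
    class DecisionRecord:
        artifact_id: str   # the AI output being accepted or rejected
        category: str      # e.g. "security_incident"
        decided_by: str    # a named human, never "the team"
        accepted: bool
        rationale: str     # the trade-off the human is accepting
        decided_at: str    # UTC timestamp, for the audit trail

    def require_human_decision(artifact_id, category, decided_by,
                               accepted, rationale):
        """Block high-impact output until a named human owns the call."""
        if category in HIGH_IMPACT and not decided_by.strip():
            raise PermissionError("High-impact output: a named human must decide.")
        return DecisionRecord(artifact_id, category, decided_by, accepted,
                              rationale, datetime.now(timezone.utc).isoformat())

The point of the sketch is the raise: the workflow itself, not etiquette, forces the decision moment and records who made it.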
155
00:05:25,760 –> 00:05:27,200
Now why does this feel uncomfortable?
156
00:05:27,200 –> 00:05:29,360
Because cognitive collaboration eliminates
157
00:05:29,360 –> 00:05:32,760
the illusion of a single right answer that you can outsource.
158
00:05:32,760 –> 00:05:34,440
Deterministic tools let you pretend
159
00:05:34,440 –> 00:05:36,880
the right configuration produces the right result.
160
00:05:36,880 –> 00:05:39,040
Cognitive systems don’t give you that comfort.
161
00:05:39,040 –> 00:05:42,280
They give you a spread of possibilities and a confidence tone.
162
00:05:42,280 –> 00:05:44,800
And they force you to admit the real truth of leadership.
163
00:05:44,800 –> 00:05:46,880
Decisions are made under uncertainty
164
00:05:46,880 –> 00:05:50,280
and responsibility cannot be delegated to a text box.
165
00:05:50,280 –> 00:05:53,120
So let’s be explicit about what cognitive collaboration is not.
166
00:05:53,120 –> 00:05:56,080
It is not an assistant because an assistant can be held accountable
167
00:05:56,080 –> 00:05:58,360
for execution quality under your direction.
168
00:05:58,360 –> 00:05:59,920
Copilot cannot be held accountable.
169
00:05:59,920 –> 00:06:01,280
It has no duty of care.
170
00:06:01,280 –> 00:06:04,400
It is not an oracle because it has no privileged access to truth.
171
00:06:04,400 –> 00:06:07,200
It has access to tokens, patterns, and your data.
172
00:06:07,200 –> 00:06:09,000
Sometimes that correlates with reality.
173
00:06:09,000 –> 00:06:10,200
Sometimes it doesn’t.
174
00:06:10,200 –> 00:06:13,040
It is not an intern because an intern learns your context
175
00:06:13,040 –> 00:06:15,960
through consequences, feedback, and social correction.
176
00:06:15,960 –> 00:06:17,760
Copilot does not learn your culture
177
00:06:17,760 –> 00:06:20,240
unless you deliberately design context, constraints,
178
00:06:20,240 –> 00:06:21,320
and enforcement around it.
179
00:06:21,320 –> 00:06:22,560
And it is not an autopilot.
180
00:06:22,560 –> 00:06:24,720
Autopilots operate inside engineered envelopes
181
00:06:24,720 –> 00:06:26,400
with explicit failover behavior.
182
00:06:26,400 –> 00:06:28,160
Copilot operates inside language.
183
00:06:28,160 –> 00:06:29,640
Language is not an envelope.
184
00:06:29,640 –> 00:06:31,120
Language is a persuasion layer.
185
00:06:31,120 –> 00:06:32,880
So the clean definition is this.
186
00:06:32,880 –> 00:06:35,720
Cognitive collaboration is a human-controlled decision loop
187
00:06:35,720 –> 00:06:38,240
where the AI proposes and the human disposes.
188
00:06:38,240 –> 00:06:40,280
If you want to see whether your organization actually
189
00:06:40,280 –> 00:06:44,400
believes that, ask one question in any Copilot rollout meeting.
190
00:06:44,400 –> 00:06:46,480
When Copilot produces a plausible output
191
00:06:46,480 –> 00:06:49,400
that leads to a bad outcome, who owns that outcome?
192
00:06:49,400 –> 00:06:52,440
If the answer is the user, but the design doesn’t force ownership,
193
00:06:52,440 –> 00:06:55,760
you have just described outsourced judgment with nicer branding.
194
00:06:55,760 –> 00:06:57,800
And that’s where the cost starts.
195
00:06:57,800 –> 00:07:02,160
The cost curve: AI scales ambiguity faster than capability.
196
00:07:02,160 –> 00:07:03,920
Here’s what most leaders miss.
197
00:07:03,920 –> 00:07:05,760
AI doesn’t scale competence first.
198
00:07:05,760 –> 00:07:08,760
It scales whatever is already true in your environment.
199
00:07:08,760 –> 00:07:10,920
If your data is messy, it scales mess.
200
00:07:10,920 –> 00:07:13,560
If your decision rights are unclear, it scales ambiguity.
201
00:07:13,560 –> 00:07:15,440
If your culture avoids accountability,
202
00:07:15,440 –> 00:07:17,560
it scales plausible deniability.
203
00:07:17,560 –> 00:07:19,120
And because the output looks clean,
204
00:07:19,120 –> 00:07:21,600
the organization confuses polish with progress.
205
00:07:21,600 –> 00:07:22,720
That is the cost curve.
206
00:07:22,720 –> 00:07:25,080
AI scales ambiguity faster than capability
207
00:07:25,080 –> 00:07:27,280
because capability takes discipline and time
208
00:07:27,280 –> 00:07:28,920
and ambiguity takes nothing.
209
00:07:28,920 –> 00:07:30,640
You can deploy ambiguity in a week.
210
00:07:30,640 –> 00:07:32,400
The scaling effect is brutally simple.
211
00:07:32,400 –> 00:07:35,080
One unclear assumption, once embedded in a prompt,
212
00:07:35,080 –> 00:07:36,760
a template, a saved Copilot page,
213
00:07:36,760 –> 00:07:38,240
or a recommended way of working
214
00:07:38,240 –> 00:07:41,160
gets replicated across hundreds or thousands of decisions.
215
00:07:41,160 –> 00:07:42,880
Not because people are malicious.
216
00:07:42,880 –> 00:07:43,960
Because people are busy.
217
00:07:43,960 –> 00:07:46,240
They see something that looks finished and they move.
218
00:07:46,240 –> 00:07:49,480
This is how plausible output becomes institutionalized.
219
00:07:49,480 –> 00:07:51,280
It starts with internal communications.
220
00:07:51,280 –> 00:07:54,280
A leader asks Copilot to draft a message about a reorg,
221
00:07:54,280 –> 00:07:57,520
a security incident, a policy change, a new benefit.
222
00:07:57,520 –> 00:07:58,600
The output reads well.
223
00:07:58,600 –> 00:07:59,680
It sounds empathetic.
224
00:07:59,680 –> 00:08:00,760
It sounds decisive.
225
00:08:00,760 –> 00:08:03,840
It also quietly invents certainty where there isn’t any.
226
00:08:03,840 –> 00:08:04,760
It creates commitments
227
00:08:04,760 –> 00:08:05,680
nobody reviewed.
228
00:08:05,680 –> 00:08:06,680
It implies intent
229
00:08:06,680 –> 00:08:07,720
nobody approved.
230
00:08:07,720 –> 00:08:09,160
And because the language is coherent,
231
00:08:09,160 –> 00:08:10,640
it becomes the official story.
232
00:08:10,640 –> 00:08:12,560
Then it moves into policy and process.
233
00:08:12,560 –> 00:08:15,760
A manager uses Copilot to answer an HR question
234
00:08:15,760 –> 00:08:17,520
or to explain a compliance requirement
235
00:08:17,520 –> 00:08:19,120
or to summarize an internal standard.
236
00:08:19,120 –> 00:08:20,160
The answer is plausible.
237
00:08:20,160 –> 00:08:20,920
It’s formatted.
238
00:08:20,920 –> 00:08:22,560
It’s easier than opening the actual policy.
239
00:08:22,560 –> 00:08:24,040
So the response gets forwarded.
240
00:08:24,040 –> 00:08:24,960
Then it gets copied.
241
00:08:24,960 –> 00:08:26,000
Then it becomes precedent.
242
00:08:26,000 –> 00:08:27,480
Now the organization has drifted
243
00:08:27,480 –> 00:08:29,040
without noticing it drifted.
244
00:08:29,040 –> 00:08:31,440
And then eventually it hits operational decisions.
245
00:08:31,440 –> 00:08:34,520
Incident triage summaries, change impact analyses,
246
00:08:34,520 –> 00:08:37,440
vendor risk writeups, forecast narratives, anything
247
00:08:37,440 –> 00:08:40,280
where a nice paragraph can substitute for actual thinking.
248
00:08:40,280 –> 00:08:42,720
At that point, the AI is not assisting.
249
00:08:42,720 –> 00:08:45,240
It is authoring the artifact that other humans will treat
250
00:08:45,240 –> 00:08:46,880
as evidence that judgment occurred.
251
00:08:46,880 –> 00:08:48,240
This is the rework tax.
252
00:08:48,240 –> 00:08:50,120
And it is the bill nobody budgets for.
253
00:08:50,120 –> 00:08:51,880
The rework tax shows up in three places.
254
00:08:51,880 –> 00:08:53,240
First, verification.
255
00:08:53,240 –> 00:08:54,600
You now have to spend time proving
256
00:08:54,600 –> 00:08:56,360
that the confident claims are grounded.
257
00:08:56,360 –> 00:08:57,880
That time is not optional.
258
00:08:57,880 –> 00:09:01,240
It is the cost of using a probabilistic system responsibly.
259
00:09:01,240 –> 00:09:03,080
But most organizations don’t allocate that time.
260
00:09:03,080 –> 00:09:06,320
So verification becomes ad hoc, personal, and uneven.
261
00:09:06,320 –> 00:09:07,440
Second, cleanup.
262
00:09:07,440 –> 00:09:09,920
When the output is wrong or misaligned or legally risky,
263
00:09:09,920 –> 00:09:11,520
someone has to unwind it.
264
00:09:11,520 –> 00:09:13,640
That unwind rarely happens in the same meeting
265
00:09:13,640 –> 00:09:14,880
where the output was generated.
266
00:09:14,880 –> 00:09:17,760
It happens later under pressure with partial context,
267
00:09:17,760 –> 00:09:19,720
usually by someone who didn’t create the artifact
268
00:09:19,720 –> 00:09:23,000
and doesn’t have the authority to correct the underlying intent.
269
00:09:23,000 –> 00:09:25,760
Third, incident response and reputational repair.
270
00:09:25,760 –> 00:09:28,880
When AI generated content causes a downstream problem,
271
00:09:28,880 –> 00:09:30,440
you don’t just fix the content.
272
00:09:30,440 –> 00:09:31,200
You fix trust.
273
00:09:31,200 –> 00:09:32,640
You fix stakeholder confidence.
274
00:09:32,640 –> 00:09:33,960
You fix audit narratives.
275
00:09:33,960 –> 00:09:36,360
You fix the next set of questions that start with,
276
00:09:36,360 –> 00:09:37,920
how did this make it out the door?
277
00:09:37,920 –> 00:09:40,280
The hidden bill is cognitive load redistribution.
278
00:09:40,280 –> 00:09:42,760
AI shifts effort from creation to evaluation.
279
00:09:42,760 –> 00:09:44,880
That sounds like a win until you realize evaluation
280
00:09:44,880 –> 00:09:46,560
is harder to scale than creation.
281
00:09:46,560 –> 00:09:47,760
Creation can be delegated.
282
00:09:47,760 –> 00:09:49,640
Evaluation requires judgment context
283
00:09:49,640 –> 00:09:51,240
and the willingness to say no.
284
00:09:51,240 –> 00:09:52,800
It also requires time.
285
00:09:52,800 –> 00:09:55,160
And time is the one resource leadership will not admit
286
00:09:55,160 –> 00:09:56,240
is required.
287
00:09:56,240 –> 00:09:58,920
So the paradox appears: teams generate more outputs
288
00:09:58,920 –> 00:10:00,640
than ever, but decisions get worse.
289
00:10:00,640 –> 00:10:02,840
People feel busier, but outcomes feel less owned.
290
00:10:02,840 –> 00:10:04,800
The organization increases throughput,
291
00:10:04,800 –> 00:10:07,360
but confidence in what is true decreases.
292
00:10:07,360 –> 00:10:10,240
And this is where hallucination becomes a convenient distraction.
293
00:10:10,240 –> 00:10:11,640
Hallucination is visible.
294
00:10:11,640 –> 00:10:12,480
It’s a screenshot.
295
00:10:12,480 –> 00:10:13,480
It’s a gotcha.
296
00:10:13,480 –> 00:10:15,520
It’s something you can blame on the model.
297
00:10:15,520 –> 00:10:18,360
But the real risk is unowned outcomes.
298
00:10:18,360 –> 00:10:20,600
Unowned outcomes look like this.
299
00:10:20,600 –> 00:10:23,040
An AI generated summary becomes the record.
300
00:10:23,040 –> 00:10:26,720
An AI generated recommendation becomes the plan.
301
00:10:26,720 –> 00:10:30,080
An AI generated policy interpretation becomes the rule.
302
00:10:30,080 –> 00:10:32,760
And when it breaks, nobody can identify the decision moment
303
00:10:32,760 –> 00:10:35,400
where a human accepted the trade-off and took responsibility.
304
00:10:35,400 –> 00:10:39,040
That is outsourced judgment, not because the AI is malicious,
305
00:10:39,040 –> 00:10:41,240
but because the organization designed a system
306
00:10:41,240 –> 00:10:44,240
where acceptance is effortless and ownership is optional.
307
00:10:44,240 –> 00:10:46,240
So the cost curve isn’t a technology curve.
308
00:10:46,240 –> 00:10:47,400
It’s an accountability curve.
309
00:10:47,400 –> 00:10:49,680
If you don’t embed judgment into the workflow,
310
00:10:49,680 –> 00:10:51,600
AI will scale the absence of it.
311
00:10:51,600 –> 00:10:53,720
And the organization will keep paying quietly
312
00:10:53,720 –> 00:10:55,440
through rework, drift, and incidents
313
00:10:55,440 –> 00:10:58,000
that feel unpredictable only because nobody wrote down
314
00:10:58,000 –> 00:10:59,200
who decided what mattered.
315
00:10:59,200 –> 00:11:01,160
Now we need to talk about why this happens even
316
00:11:01,160 –> 00:11:03,040
in mature organizations.
317
00:11:03,040 –> 00:11:06,440
The automation-to-augmentation story is a false ladder.
318
00:11:06,440 –> 00:11:09,080
Automation, augmentation, collaboration: the false ladder.
319
00:11:09,080 –> 00:11:11,680
Most organizations tell themselves a comforting story.
320
00:11:11,680 –> 00:11:14,400
Automation, then augmentation, then collaboration.
321
00:11:14,400 –> 00:11:16,640
A neat maturity ladder, a linear climb.
322
00:11:16,640 –> 00:11:18,680
Get a few wins, build confidence,
323
00:11:18,680 –> 00:11:21,440
and eventually you collaborate with AI.
324
00:11:21,440 –> 00:11:23,120
That story is wrong in a specific way.
325
00:11:23,120 –> 00:11:25,840
It assumes the end state is just a more advanced version
326
00:11:25,840 –> 00:11:30,160
of the start state, the same work done faster, with better tooling.
327
00:11:30,160 –> 00:11:32,560
But collaboration isn’t more augmentation.
328
00:11:32,560 –> 00:11:34,080
It’s a different operating model.
329
00:11:34,080 –> 00:11:35,560
And if you treat it like a ladder,
330
00:11:35,560 –> 00:11:38,320
you will stall exactly where the responsibility shifts.
331
00:11:38,320 –> 00:11:41,280
Automation is the cleanest and most honest form of value.
332
00:11:41,280 –> 00:11:43,440
You define a rule, a boundary, and an outcome.
333
00:11:43,440 –> 00:11:44,960
A workflow routes approvals.
334
00:11:44,960 –> 00:11:46,400
A system enforces a control.
335
00:11:46,400 –> 00:11:48,360
A runbook remediates a known condition.
336
00:11:48,360 –> 00:11:51,400
The RACI stays crisp because the system isn’t deciding what matters.
337
00:11:51,400 –> 00:11:53,560
It’s executing what you already decided matters.
338
00:11:53,560 –> 00:11:54,880
That is why automation scales.
339
00:11:54,880 –> 00:11:57,040
It turns intent into repeatable consequence.
340
00:11:57,040 –> 00:11:59,720
Augmentation is where the organization starts to get addicted.
341
00:11:59,720 –> 00:12:01,280
Augmentation feels like free speed.
342
00:12:01,280 –> 00:12:03,720
It drafts the email, summarizes the meeting,
343
00:12:03,720 –> 00:12:06,160
generates the slide outline, cleans up the spreadsheet,
344
00:12:06,160 –> 00:12:07,640
writes the status update.
345
00:12:07,640 –> 00:12:11,120
The human still owns the work, but the machine removes friction.
346
00:12:11,120 –> 00:12:13,920
And because the outcomes are low stakes and reversible,
347
00:12:13,920 –> 00:12:16,560
nobody has to confront the accountability problem yet.
348
00:12:16,560 –> 00:12:18,640
If the draft is mediocre, you edit it.
349
00:12:18,640 –> 00:12:20,480
If the summary misses a point, you correct it.
350
00:12:20,480 –> 00:12:23,000
The organization can pretend this is just productivity.
351
00:12:23,000 –> 00:12:25,680
Collaboration is where that pretend contract dies.
352
00:12:25,680 –> 00:12:28,600
Because collaboration is not helping you do the task.
353
00:12:28,600 –> 00:12:31,720
It is co-producing the thinking artifacts that other people will treat
354
00:12:31,720 –> 00:12:33,920
as decisions, commitments and records.
355
00:12:33,920 –> 00:12:37,000
It changes how problems get framed, how options get generated,
356
00:12:37,000 –> 00:12:39,200
and how conclusions get justified.
357
00:12:39,200 –> 00:12:42,400
That means the human role shifts from creator to arbiter.
358
00:12:42,400 –> 00:12:44,720
And arbitration is where leadership discomfort shows up,
359
00:12:44,720 –> 00:12:47,040
because it can’t be delegated to a license.
360
00:12:47,040 –> 00:12:48,400
Here’s why leaders stall.
361
00:12:48,400 –> 00:12:50,960
They want speed without responsibility shifts.
362
00:12:50,960 –> 00:12:52,920
They want the upside of a cognition engine,
363
00:12:52,920 –> 00:12:56,680
without accepting that cognition generates ambiguity by default.
364
00:12:56,680 –> 00:13:01,080
They want outputs they can forward without having to own the trade-offs embedded in the words.
365
00:13:01,080 –> 00:13:03,720
They want the AI to behave like deterministic software,
366
00:13:03,720 –> 00:13:08,200
because deterministic software lets them enforce compliance with checkboxes and dashboards.
367
00:13:08,200 –> 00:13:10,760
But a probabilistic system doesn’t obey checkboxes.
368
00:13:10,760 –> 00:13:12,560
It obeys incentives, context, and neglect.
369
00:13:12,560 –> 00:13:15,400
So what happens in real deployments is not a smooth climb.
370
00:13:15,400 –> 00:13:16,480
It’s a sideways slide.
371
00:13:16,480 –> 00:13:20,560
Organizations skip over collaboration by pretending they’re still in augmentation.
372
00:13:20,560 –> 00:13:23,040
They keep the language of assist and productivity,
373
00:13:23,040 –> 00:13:26,640
but they start using AI in places where the output functions as judgment.
374
00:13:26,640 –> 00:13:29,920
Policies, incident narratives, change impact statements,
375
00:13:29,920 –> 00:13:32,760
executive summaries that become direction by implication.
376
00:13:32,760 –> 00:13:34,960
And because nobody redefined accountability,
377
00:13:34,960 –> 00:13:39,120
the organization quietly converts drafting into deciding.
378
00:13:39,120 –> 00:13:41,520
This is why Copilot exposure is so uncomfortable.
379
00:13:41,520 –> 00:13:43,280
It doesn’t just reveal data chaos.
380
00:13:43,280 –> 00:13:44,640
It reveals thinking chaos.
381
00:13:44,640 –> 00:13:46,320
Copilot can’t fix fuzzy intent.
382
00:13:46,320 –> 00:13:48,760
It can only generate fluent substitutes for it.
383
00:13:48,760 –> 00:13:51,600
If you give it a vague goal like “improve customer experience,”
384
00:13:51,600 –> 00:13:54,600
it will produce a plausible plan that sounds like every other plan.
385
00:13:54,600 –> 00:13:56,560
If you give it conflicting constraints,
386
00:13:56,560 –> 00:13:59,120
it will produce a compromise that nobody chose.
387
00:13:59,120 –> 00:14:01,760
If you give it a political situation you refuse to name,
388
00:14:01,760 –> 00:14:05,000
it will generate language that avoids conflict while creating risk.
389
00:14:05,000 –> 00:14:06,320
The AI isn’t failing there.
390
00:14:06,320 –> 00:14:09,560
You are watching the organization’s unspoken assumptions leak into output.
391
00:14:09,560 –> 00:14:12,440
And once that output is shared, it becomes a social object.
392
00:14:12,440 –> 00:14:14,200
People react to it as if it is real.
393
00:14:14,200 –> 00:14:15,120
They align against it.
394
00:14:15,120 –> 00:14:16,320
They execute against it.
395
00:14:16,320 –> 00:14:17,200
They quote it.
396
00:14:17,200 –> 00:14:18,720
They treat it as their decision.
397
00:14:18,720 –> 00:14:19,960
And then when it causes harm,
398
00:14:19,960 –> 00:14:21,960
everyone looks for the person who approved it.
399
00:14:21,960 –> 00:14:23,760
That person often doesn’t exist.
400
00:14:23,760 –> 00:14:25,920
This is the moment where the false ladder matters.
401
00:14:25,920 –> 00:14:27,760
Automation has explicit decision points.
402
00:14:27,760 –> 00:14:30,120
Augmentation hides them because stakes are low.
403
00:14:30,120 –> 00:14:32,520
Collaboration requires you to reintroduce them on purpose
404
00:14:32,520 –> 00:14:35,840
because stakes are high and outputs look finished even when the thinking isn’t.
405
00:14:35,840 –> 00:14:38,000
So the practical diagnostic is simple.
406
00:14:38,000 –> 00:14:40,760
Where does your organization force a human to declare,
407
00:14:40,760 –> 00:14:43,040
“I accept this conclusion and its consequences”?
408
00:14:43,040 –> 00:14:45,960
If the answer is nowhere, you are not doing collaboration.
409
00:14:45,960 –> 00:14:48,760
You are doing outsourced judgment with better formatting.
410
00:14:48,760 –> 00:14:51,680
And if leaders keep insisting it’s just a productivity tool,
411
00:14:51,680 –> 00:14:52,800
they are not being neutral.
412
00:14:52,800 –> 00:14:54,320
They are choosing a design.
413
00:14:54,320 –> 00:14:56,480
A system where decisions happen by drift,
414
00:14:56,480 –> 00:14:57,960
that is not a maturity ladder,
415
00:14:57,960 –> 00:15:00,240
that is architectural erosion in motion.
416
00:15:00,240 –> 00:15:01,720
Mental model to unlearn.
417
00:15:01,720 –> 00:15:03,200
AI gives answers.
418
00:15:03,200 –> 00:15:05,560
The next mistake is the one leaders defend the hardest
419
00:15:05,560 –> 00:15:07,080
because it feels efficient.
420
00:15:07,080 –> 00:15:08,120
AI gives answers.
421
00:15:08,120 –> 00:15:08,800
They are wrong.
422
00:15:08,800 –> 00:15:10,960
AI gives options that look like answers.
423
00:15:10,960 –> 00:15:14,640
And that distinction matters because answer-shaped language triggers closure.
424
00:15:14,640 –> 00:15:17,520
You stop checking, stop challenging, stop asking what changed.
425
00:15:17,520 –> 00:15:19,720
So run the self-test that leadership avoids.
426
00:15:19,720 –> 00:15:21,800
If this AI output turns out to be wrong,
427
00:15:21,800 –> 00:15:23,200
who gets called into the room?
428
00:15:23,200 –> 00:15:25,360
An answer is something you can stake an outcome on.
429
00:15:25,360 –> 00:15:27,080
An answer has an implied warranty.
430
00:15:27,080 –> 00:15:28,840
AI outputs don’t come with warranties.
431
00:15:28,840 –> 00:15:30,200
They come with plausible structure,
432
00:15:30,200 –> 00:15:32,200
and plausible is not a quality standard.
433
00:15:32,200 –> 00:15:33,480
It’s a warning label.
434
00:15:33,480 –> 00:15:35,000
The uncomfortable truth is this.
435
00:15:35,000 –> 00:15:37,480
Copilot is a synthesis engine, not a truth engine.
436
00:15:37,480 –> 00:15:40,440
It compresses context into a narrative that reads well.
437
00:15:40,440 –> 00:15:42,560
The moment you treat that narrative as an answer,
438
00:15:42,560 –> 00:15:44,120
you invert responsibility.
439
00:15:44,120 –> 00:15:46,120
You turn the tool into the decision maker
440
00:15:46,120 –> 00:15:47,680
and the human into the forwarder,
441
00:15:47,680 –> 00:15:49,160
and forwarding is not leadership.
442
00:15:49,160 –> 00:15:51,360
Copilot can be wrong without hallucinating.
443
00:15:51,360 –> 00:15:53,240
It just needs missing context,
444
00:15:53,240 –> 00:15:56,120
which is the default state of every organization.
445
00:15:56,120 –> 00:15:58,800
So the right mental model is not AI gives answers.
446
00:15:58,800 –> 00:16:01,040
It’s AI generates hypotheses.
447
00:16:01,040 –> 00:16:03,760
If an AI output can move money, change access,
448
00:16:03,760 –> 00:16:06,720
create a commitment, set policy, or become a record,
449
00:16:06,720 –> 00:16:08,600
it is not an answer.
450
00:16:08,600 –> 00:16:10,640
It is a proposal that must be owned.
451
00:16:10,640 –> 00:16:14,560
Owned means someone has to say explicitly, “I accept this.”
452
00:16:14,560 –> 00:16:16,240
This is where answer-shaped text
453
00:16:16,240 –> 00:16:18,120
does damage at the leadership level.
454
00:16:18,120 –> 00:16:19,480
Executives live in abstraction.
455
00:16:19,480 –> 00:16:21,880
They already operate through summaries, status reports,
456
00:16:21,880 –> 00:16:22,880
and slide decks.
457
00:16:22,880 –> 00:16:26,360
So an AI generated executive summary feels like a perfect fit.
458
00:16:26,360 –> 00:16:28,920
Faster, cleaner, more comprehensive.
459
00:16:28,920 –> 00:16:30,360
Except it isn’t more comprehensive.
460
00:16:30,360 –> 00:16:31,600
It is more confident.
461
00:16:31,600 –> 00:16:33,720
And confidence triggers premature closure.
462
00:16:33,720 –> 00:16:35,840
Premature closure is a well-known failure mode
463
00:16:35,840 –> 00:16:37,040
in high stakes work.
464
00:16:37,040 –> 00:16:39,080
You accept the first coherent explanation
465
00:16:39,080 –> 00:16:41,400
and stop exploring alternatives.
466
00:16:41,400 –> 00:16:44,160
In an AI context, premature closure happens
467
00:16:44,160 –> 00:16:46,680
when the output sounds complete enough to ship.
468
00:16:46,680 –> 00:16:49,280
The organization confuses completion with correctness.
469
00:16:49,280 –> 00:16:50,480
So the posture has to change.
470
00:16:50,480 –> 00:16:53,080
Healthy posture is to treat AI outputs as drafts
471
00:16:53,080 –> 00:16:53,960
and hypotheses.
472
00:16:53,960 –> 00:16:56,440
Unhealthy posture is to treat them as decisions.
473
00:16:56,440 –> 00:16:57,840
The difference is not philosophical.
474
00:16:57,840 –> 00:16:59,840
The difference is whether your process forces
475
00:16:59,840 –> 00:17:00,840
a judgment moment.
476
00:17:00,840 –> 00:17:03,040
A judgment moment is the point where a human must do
477
00:17:03,040 –> 00:17:03,920
three things.
478
00:17:03,920 –> 00:17:06,760
Name the intent, name the trade-off, and name the owner.
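As a sketch of how literal that gate can be, here is the judgment moment reduced to code; the function name and fields are assumptions for illustration:

    # Illustrative sketch: an output is promoted to a decision only when
    # all three sentences exist. Empty strings mean no judgment happened.
    def judgment_moment(intent: str, trade_off: str, owner: str) -> dict:
        for label, value in (("intent", intent),
                             ("trade-off", trade_off),
                             ("owner", owner)):
            if not value.strip():
                raise ValueError(f"Not shippable: the {label} was never named.")
        return {"intent": intent, "trade_off": trade_off, "owner": owner}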
479
00:17:06,760 –> 00:17:09,480
If the output can’t survive those three sentences,
480
00:17:09,480 –> 00:17:12,280
it has no business becoming institutional truth.
481
00:17:12,280 –> 00:17:14,560
And here’s the final part leaders don’t like.
482
00:17:14,560 –> 00:17:17,320
If AI gives options, then someone has to decide
483
00:17:17,320 –> 00:17:18,240
what matters.
484
00:17:18,240 –> 00:17:19,400
Not what is possible.
485
00:17:19,400 –> 00:17:20,240
What matters.
486
00:17:20,240 –> 00:17:21,520
That’s the gap AI exposes.
487
00:17:21,520 –> 00:17:23,600
It will happily give you 10 plausible paths.
488
00:17:23,600 –> 00:17:24,840
It will even rank them.
489
00:17:24,840 –> 00:17:27,440
But it cannot carry the moral weight of choosing one path
490
00:17:27,440 –> 00:17:29,880
over another in your context with your constraints,
491
00:17:29,880 –> 00:17:32,040
with your politics and with your risk appetite.
492
00:17:32,040 –> 00:17:34,800
So when a leader asks, what does the AI say?
493
00:17:34,800 –> 00:17:36,400
They are not asking a neutral question.
494
00:17:36,400 –> 00:17:37,800
They are trying to transfer ownership
495
00:17:37,800 –> 00:17:39,240
to an unownable system.
496
00:17:39,240 –> 00:17:41,520
Everything clicked for most experienced architects
497
00:17:41,520 –> 00:17:42,920
when they realized this.
498
00:17:42,920 –> 00:17:44,880
The cost of AI isn’t hallucination.
499
00:17:44,880 –> 00:17:47,520
The cost is decision avoidance with better grammar.
500
00:17:47,520 –> 00:17:50,360
And once you see that, the next failure mode becomes obvious.
501
00:17:50,360 –> 00:17:52,840
Leaders respond by trying to brute force better answers
502
00:17:52,840 –> 00:17:53,880
with more prompts.
503
00:17:53,880 –> 00:17:55,200
Mental model to unlearn.
504
00:17:55,200 –> 00:17:57,120
More prompts equals better results.
505
00:17:57,120 –> 00:17:58,880
So leaders discover the first trap.
506
00:17:58,880 –> 00:18:00,200
AI doesn’t give answers.
507
00:18:00,200 –> 00:18:01,920
It gives option-shaped text.
508
00:18:01,920 –> 00:18:04,080
And instead of changing the operating model,
509
00:18:04,080 –> 00:18:06,720
they reach for the only lever they can see.
510
00:18:06,720 –> 00:18:09,400
More prompts, more templates, more prompt engineering,
511
00:18:09,400 –> 00:18:11,320
more libraries. That’s not strategy.
512
00:18:11,320 –> 00:18:12,920
That’s avoidance with extra steps.
513
00:18:12,920 –> 00:18:15,720
Prompt volume is usually a proxy for unclear intent
514
00:18:15,720 –> 00:18:17,120
or missing authority.
515
00:18:17,120 –> 00:18:18,720
People keep rewriting prompts because they
516
00:18:18,720 –> 00:18:20,520
can’t state what they actually need
517
00:18:20,520 –> 00:18:23,600
or they refuse to commit to the trade-offs they’re asking for.
518
00:18:23,600 –> 00:18:26,240
So they keep nudging the system, hoping it will hand them
519
00:18:26,240 –> 00:18:28,360
an output that feels safe to forward.
520
00:18:28,360 –> 00:18:29,360
It won’t.
521
00:18:29,360 –> 00:18:32,200
So ask the self-test that collapses the whole fantasy.
522
00:18:32,200 –> 00:18:33,960
If you deleted all your prompts tomorrow,
523
00:18:33,960 –> 00:18:35,680
would the decision still exist?
524
00:18:35,680 –> 00:18:37,680
Prompting can’t replace problem definition.
525
00:18:37,680 –> 00:18:39,000
It can’t replace constraints.
526
00:18:39,000 –> 00:18:40,440
It can’t replace decision rights.
527
00:18:40,440 –> 00:18:43,920
A prompt is not a substitute for knowing what good looks like.
528
00:18:43,920 –> 00:18:46,560
When intent is coherent, you don’t need 30 prompts.
529
00:18:46,560 –> 00:18:48,040
You need one.
530
00:18:48,040 –> 00:18:50,240
When intent is fuzzy, you can prompt forever
531
00:18:50,240 –> 00:18:51,880
and still never converge because you
532
00:18:51,880 –> 00:18:54,840
build a culture of “try again” instead of “decide.”
533
00:18:54,840 –> 00:18:56,880
Executives love prompt libraries because they
534
00:18:56,880 –> 00:18:58,240
feel like governance.
535
00:18:58,240 –> 00:19:00,000
But a prompt library is not a control plane.
536
00:19:00,000 –> 00:19:01,120
It doesn’t enforce anything.
537
00:19:01,120 –> 00:19:04,160
It just makes it easier to generate more unowned artifacts
538
00:19:04,160 –> 00:19:06,840
faster. That is not maturity. That is industrialized
539
00:19:06,840 –> 00:19:07,920
ambiguity.
540
00:19:07,920 –> 00:19:09,640
And yes, prompts matter, of course they do.
541
00:19:09,640 –> 00:19:11,360
If you ask for garbage, you get garbage.
542
00:19:11,360 –> 00:19:13,520
If you give no context, you get generic output.
543
00:19:13,520 –> 00:19:14,880
That’s basic system behavior.
544
00:19:14,880 –> 00:19:17,080
But what matters more is whether the organization
545
00:19:17,080 –> 00:19:19,840
has discipline around three things that prompting can’t
546
00:19:19,840 –> 00:19:20,320
fix.
547
00:19:20,320 –> 00:19:21,640
First, explicit constraints.
548
00:19:21,640 –> 00:19:23,040
Not “be compliant.”
549
00:19:23,040 –> 00:19:25,760
Actual constraints: which policy governs, which data is
550
00:19:25,760 –> 00:19:28,040
authoritative, which audience is allowed, which risk
551
00:19:28,040 –> 00:19:30,640
threshold applies, what you refuse to claim, what you refuse
552
00:19:30,640 –> 00:19:31,520
to commit to.
553
00:19:31,520 –> 00:19:32,880
Second, decision ownership.
554
00:19:32,880 –> 00:19:34,720
Who can accept the output as actionable?
555
00:19:34,720 –> 00:19:35,480
Who can approve?
556
00:19:35,480 –> 00:19:36,360
Who can override?
557
00:19:36,360 –> 00:19:37,760
Who gets blamed when it’s wrong?
558
00:19:37,760 –> 00:19:39,760
If that sounds harsh, good. Harsh is reality.
559
00:19:39,760 –> 00:19:41,360
Accountability is not a vibe.
560
00:19:41,360 –> 00:19:42,960
Third, enforced convergence.
561
00:19:42,960 –> 00:19:45,320
Where does the workflow force a human to stop generating
562
00:19:45,320 –> 00:19:46,360
options and pick one?
563
00:19:46,360 –> 00:19:48,920
If you don’t have that, your organization will live inside
564
00:19:48,920 –> 00:19:49,680
drafts forever.
565
00:19:49,680 –> 00:19:52,760
It will generate more content and fewer decisions.
566
00:19:52,760 –> 00:19:55,160
This is why prompt obsession maps so cleanly
567
00:19:55,160 –> 00:19:56,400
to leadership weakness.
568
00:19:56,400 –> 00:19:58,320
Strategy fuzziness becomes prompt fuzziness.
569
00:19:58,320 –> 00:20:00,280
Unclear priorities become prompt bloat.
570
00:20:00,280 –> 00:20:02,400
Political avoidance becomes prompt gymnastics.
571
00:20:02,400 –> 00:20:04,880
The output looks professional, but the thinking stays
572
00:20:04,880 –> 00:20:05,640
uncommitted.
573
00:20:05,640 –> 00:20:08,920
So the mindset to unlearn is not prompts are useful.
574
00:20:08,920 –> 00:20:10,800
It’s the belief that prompting is the work.
575
00:20:10,800 –> 00:20:12,280
It isn’t.
576
00:20:12,280 –> 00:20:14,440
The work is judgment, framing the problem,
577
00:20:14,440 –> 00:20:17,160
declaring constraints and owning consequences.
578
00:20:17,160 –> 00:20:19,120
Prompts are just how you ask the cognition engine
579
00:20:19,120 –> 00:20:21,560
to propose possibilities inside the box you should have built
580
00:20:21,560 –> 00:20:22,360
first.
581
00:20:22,360 –> 00:20:24,200
And if you don’t build the box, Copilot
582
00:20:24,200 –> 00:20:27,360
will happily build one for you, out of whatever it can infer,
583
00:20:27,360 –> 00:20:29,000
which means you didn’t get a better result.
584
00:20:29,000 –> 00:20:31,000
You got a better looking substitute for a decision
585
00:20:31,000 –> 00:20:32,240
you avoided.
586
00:20:32,240 –> 00:20:35,080
Mental model to unlearn. We’ll train users later.
587
00:20:35,080 –> 00:20:36,800
After prompt obsession, the next lie
588
00:20:36,800 –> 00:20:38,440
shows up as a scheduling decision.
589
00:20:38,440 –> 00:20:41,040
We’ll train users later.
590
00:20:41,040 –> 00:20:42,800
Later is where responsibility goes to die,
591
00:20:42,800 –> 00:20:45,040
because the first two weeks are when habits form.
592
00:20:45,040 –> 00:20:48,240
That’s when people learn what works, what gets praised,
593
00:20:48,240 –> 00:20:50,360
what saves time, and what they can get away with.
594
00:20:50,360 –> 00:20:52,920
So ask the only question that matters in week two.
595
00:20:52,920 –> 00:20:55,320
What behavior did you reward in the first two weeks?
596
00:20:55,320 –> 00:20:57,760
AI doesn’t just create outputs in those first two weeks.
597
00:20:57,760 –> 00:20:59,680
It creates behavioral defaults.
598
00:20:59,680 –> 00:21:02,880
If the default is “copy-paste Copilot output into email,”
599
00:21:02,880 –> 00:21:04,040
that becomes culture.
600
00:21:04,040 –> 00:21:06,720
If the default is “forward the summary as the record,”
601
00:21:06,720 –> 00:21:08,160
that becomes precedent.
602
00:21:08,160 –> 00:21:10,880
If the default is “ask the AI to interpret policy
603
00:21:10,880 –> 00:21:13,840
so I don’t have to read it,” that becomes the unofficial operating
604
00:21:13,840 –> 00:21:14,360
model.
605
00:21:14,360 –> 00:21:17,080
You don’t undo that with a training deck three months later.
606
00:21:17,080 –> 00:21:18,520
This is the uncomfortable truth.
607
00:21:18,520 –> 00:21:21,640
You don’t train AI adoption, you condition judgment behavior,
608
00:21:21,640 –> 00:21:23,800
and conditioning happens at the speed of reward.
609
00:21:23,800 –> 00:21:26,000
In most organizations, the reward is simple.
610
00:21:26,000 –> 00:21:26,640
Speed.
611
00:21:26,640 –> 00:21:29,280
You respond faster, you ship faster, you look productive.
612
00:21:29,280 –> 00:21:31,840
Nobody asks how you got there, because they like the output.
613
00:21:31,840 –> 00:21:34,200
So you keep doing it, then someone else copies your pattern.
614
00:21:34,200 –> 00:21:37,080
Then a manager asks why you aren’t using Copilot the same way.
615
00:21:37,080 –> 00:21:39,520
And now you’ve scaled the behavior before you ever defined
616
00:21:39,520 –> 00:21:40,640
what good looks like.
617
00:21:40,640 –> 00:21:43,280
This is why AI literacy is a misleading label.
618
00:21:43,280 –> 00:21:46,200
AI literacy is knowing that the system is probabilistic,
619
00:21:46,200 –> 00:21:48,120
that it can be wrong, that sources matter,
620
00:21:48,120 –> 00:21:49,480
that sensitive data matters.
621
00:21:49,480 –> 00:21:52,480
Fine. Necessary, not sufficient.
622
00:21:52,480 –> 00:21:54,280
Judgment literacy is something else.
623
00:21:54,280 –> 00:21:56,800
The ability to treat AI output as a proposal,
624
00:21:56,800 –> 00:21:58,600
to interrogate it, to frame the decision,
625
00:21:58,600 –> 00:21:59,720
and to own the consequences.
626
00:21:59,720 –> 00:22:02,480
Judgment is the bottleneck, not prompts, not features,
627
00:22:02,480 –> 00:22:03,840
not licensing.
628
00:22:03,840 –> 00:22:06,360
And here’s the part leadership avoids saying out loud.
629
00:22:06,360 –> 00:22:09,400
They delegate adoption to admins and then blame users.
630
00:22:09,400 –> 00:22:13,120
They assign licenses, run a few enablement sessions,
631
00:22:13,120 –> 00:22:16,560
publish a prompt gallery, and call it change management.
632
00:22:16,560 –> 00:22:19,320
Then they act surprised when users do what humans always do
633
00:22:19,320 –> 00:22:20,920
with low friction systems.
634
00:22:20,920 –> 00:22:22,920
They optimize for speed and social safety.
635
00:22:22,920 –> 00:22:25,200
They avoid being wrong, they avoid being slow,
636
00:22:25,200 –> 00:22:27,960
they avoid taking responsibility for uncertainty.
637
00:22:27,960 –> 00:22:31,320
So they paste the AI output as is, because it sounds competent
638
00:22:31,320 –> 00:22:32,520
and it distributes blame.
639
00:22:32,520 –> 00:22:35,840
If the message is wrong, the user can quietly imply it was the tool.
640
00:22:35,840 –> 00:22:38,120
If the policy interpretation is wrong, they can say,
641
00:22:38,120 –> 00:22:39,600
that’s what Copilot said.
642
00:22:39,600 –> 00:22:42,120
If the incident summary is wrong, they can claim
643
00:22:42,120 –> 00:22:44,040
they were just relaying what the system produced.
644
00:22:44,040 –> 00:22:45,120
That’s not malice.
645
00:22:45,120 –> 00:22:47,120
That’s rational behavior inside a culture
646
00:22:47,120 –> 00:22:50,040
that punishes uncertainty and rewards throughput.
647
00:22:50,040 –> 00:22:53,160
So “training later” really means: we’ll let the system write
648
00:22:53,160 –> 00:22:56,080
our culture for two weeks and then hope a webinar fixes it.
649
00:22:56,080 –> 00:22:56,960
It won’t.
650
00:22:56,960 –> 00:22:59,600
Training has to be front loaded because early usage
651
00:22:59,600 –> 00:23:01,960
is where people learn what the organization tolerates.
652
00:23:01,960 –> 00:23:04,120
If nobody teaches verification discipline early,
653
00:23:04,120 –> 00:23:05,400
people won’t invent it later.
654
00:23:05,400 –> 00:23:08,400
If nobody teaches that sharing AI output requires framing,
655
00:23:08,400 –> 00:23:10,480
people won’t spontaneously start doing it.
656
00:23:10,480 –> 00:23:13,920
If nobody teaches that high stakes outputs require escalation,
657
00:23:13,920 –> 00:23:16,360
people will treat everything like a draft until the day
658
00:23:16,360 –> 00:23:19,880
it becomes evidence in an audit or an incident review.
659
00:23:19,880 –> 00:23:23,600
The other reason later fails is that AI is not a static feature set.
660
00:23:23,600 –> 00:23:25,600
It changes continuously, the model changes,
661
00:23:25,600 –> 00:23:29,160
the UI changes, new grounding mechanisms appear, new agents show up.
662
00:23:29,160 –> 00:23:31,120
So if you think training is a phase,
663
00:23:31,120 –> 00:23:33,160
you are treating an evolving cognition layer
664
00:23:33,160 –> 00:23:34,840
like an email client migration.
665
00:23:34,840 –> 00:23:36,120
That mindset is obsolete.
666
00:23:36,120 –> 00:23:37,840
Training isn’t enablement content,
667
00:23:37,840 –> 00:23:40,360
it’s behavioral guardrails with repetition.
668
00:23:40,360 –> 00:23:43,480
The minimum training that matters is not “here are tips and tricks.”
669
00:23:43,480 –> 00:23:45,880
It’s three rules that remove deniability.
670
00:23:45,880 –> 00:23:48,320
One, if you share AI output,
671
00:23:48,320 –> 00:23:51,720
you add one sentence that states your intent and confidence level.
672
00:23:51,720 –> 00:23:55,560
Draft for discussion, recommendation, pending validation,
673
00:23:55,560 –> 00:23:57,720
confirmed against policy X,
674
00:23:57,720 –> 00:24:00,440
anything that reattaches ownership to a human.
675
00:24:00,440 –> 00:24:03,400
Two, if the output affects access, money, policy,
676
00:24:03,400 –> 00:24:07,480
or risk posture, you escalate, not by etiquette, by design.
677
00:24:07,480 –> 00:24:11,360
Three, if you can’t explain why the output is correct,
678
00:24:11,360 –> 00:24:12,560
you’re not allowed to act on it.
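Here is a hedged sketch of those three rules as an actual gate rather than etiquette. The labels and the sensitive categories are assumptions; a real deployment would wire this into whatever sharing workflow already exists:

    # Illustrative sketch: the three rules enforced before AI output ships.
    SENSITIVE = {"access", "money", "policy", "risk_posture"}  # assumed scope
    OWNERSHIP_LABELS = {"draft for discussion",
                        "recommendation, pending validation",
                        "confirmed against policy"}

    def share_gate(output: str, label: str, affects: set,
                   escalated: bool, can_explain: bool) -> str:
        # Rule 1: a stated intent and confidence level reattaches ownership.
        if label not in OWNERSHIP_LABELS:
            raise ValueError("State your intent and confidence before sharing.")
        # Rule 2: access, money, policy, or risk posture escalates by design.
        if affects & SENSITIVE and not escalated:
            raise PermissionError("High-impact output requires escalation.")
        # Rule 3: if you can't explain why it's correct, you can't act on it.
        if not can_explain:
            raise PermissionError("Unexplained output cannot be acted on.")
        return f"[{label}] {output}"

    # Example: passes only because it is labeled, escalated, and explainable.
    print(share_gate("Q3 access review summary", "draft for discussion",
                     {"policy"}, escalated=True, can_explain=True))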
679
00:24:12,560 –> 00:24:14,240
These rules feel strict because they are.
680
00:24:14,240 –> 00:24:17,400
Strictness is how you stop judgment from evaporating.
681
00:24:17,400 –> 00:24:18,760
So the unlearn is simple.
682
00:24:18,760 –> 00:24:21,680
You don’t get to postpone training in a probabilistic system.
683
00:24:21,680 –> 00:24:23,600
The system trains your users immediately.
684
00:24:23,600 –> 00:24:25,200
If you don’t define the discipline,
685
00:24:25,200 –> 00:24:26,760
the platform defines the habit,
686
00:24:26,760 –> 00:24:28,920
and the habit will always drift towards speed
687
00:24:28,920 –> 00:24:30,000
and away from ownership.
688
00:24:30,000 –> 00:24:31,560
And once that drift exists,
689
00:24:31,560 –> 00:24:34,120
governance becomes the next lie people tell themselves.
690
00:24:34,120 –> 00:24:36,080
Mental model to unlearn.
691
00:24:36,080 –> 00:24:38,080
Governance slows innovation.
692
00:24:38,080 –> 00:24:40,120
“Governance slows innovation” is what people say
693
00:24:40,120 –> 00:24:41,600
when they want the benefits of scale
694
00:24:41,600 –> 00:24:43,040
without the cost of control.
695
00:24:43,040 –> 00:24:45,160
It is not a philosophy, it’s a confession.
696
00:24:45,160 –> 00:24:47,160
So ask the self-test before you repeat it
697
00:24:47,160 –> 00:24:48,800
in your next steering committee.
698
00:24:48,800 –> 00:24:50,440
Where does the system physically prevent
699
00:24:50,440 –> 00:24:51,840
a bad decision from shipping?
700
00:24:51,840 –> 00:24:54,360
In the tool era, you could get away with weak governance
701
00:24:54,360 –> 00:24:56,400
because most failures stayed local.
702
00:24:56,400 –> 00:24:57,960
A bad spreadsheet breaks a forecast,
703
00:24:57,960 –> 00:24:59,840
a bad workflow breaks a process.
704
00:24:59,840 –> 00:25:01,600
The blast radius is annoying but bounded.
705
00:25:01,600 –> 00:25:03,320
Cognitive systems don’t fail locally.
706
00:25:03,320 –> 00:25:04,640
They fail socially.
707
00:25:04,640 –> 00:25:07,040
They produce language that moves through the organization
708
00:25:07,040 –> 00:25:09,080
faster than any ticketing system,
709
00:25:09,080 –> 00:25:10,640
faster than any policy update,
710
00:25:10,640 –> 00:25:12,920
faster than any corrective communication.
711
00:25:12,920 –> 00:25:16,040
And once people repeat it, it becomes what we believe.
712
00:25:16,040 –> 00:25:18,800
So governance in an AI environment is not bureaucracy.
713
00:25:18,800 –> 00:25:20,040
It is entropy management,
714
00:25:20,040 –> 00:25:21,720
and if you refuse to manage entropy,
715
00:25:21,720 –> 00:25:24,320
you don’t get innovation, you get drift.
716
00:25:24,320 –> 00:25:25,840
Here’s the uncomfortable truth.
717
00:25:25,840 –> 00:25:27,440
Experimentation without guardrails
718
00:25:27,440 –> 00:25:28,960
is just uncontrolled variance.
719
00:25:28,960 –> 00:25:31,360
It creates a hundred ways of doing the same thing.
720
00:25:31,360 –> 00:25:33,880
None of them documented, none of them accountable.
721
00:25:33,880 –> 00:25:36,120
And all of them capable of becoming precedent.
722
00:25:36,120 –> 00:25:37,280
That is not a learning culture.
723
00:25:37,280 –> 00:25:39,240
That is an organization producing noise
724
00:25:39,240 –> 00:25:40,640
and calling it progress.
725
00:25:40,640 –> 00:25:43,400
This is where executives reach for the wrong kind of governance.
726
00:25:43,400 –> 00:25:45,040
They write an acceptable use policy.
727
00:25:45,040 –> 00:25:46,640
They do a quick training module.
728
00:25:46,640 –> 00:25:49,000
They publish a list of do’s and don’ts.
729
00:25:49,000 –> 00:25:52,120
They tell people not to paste confidential data into prompts
730
00:25:52,120 –> 00:25:54,160
and then they declare governance done.
731
00:25:54,160 –> 00:25:55,240
That’s not governance.
732
00:25:55,240 –> 00:25:57,000
That’s paperwork.
733
00:25:57,000 –> 00:26:00,000
Real governance in this context is enforced intent.
734
00:26:00,000 –> 00:26:03,160
Explicit decision rights, data boundaries, auditability
735
00:26:03,160 –> 00:26:06,360
and escalation paths that activate when the output matters.
736
00:26:06,360 –> 00:26:07,480
The key shift is this.
737
00:26:07,480 –> 00:26:09,360
Governance isn’t about controlling the model.
738
00:26:09,360 –> 00:26:11,400
It’s about controlling what the organization is allowed
739
00:26:11,400 –> 00:26:12,600
to treat as true.
740
00:26:12,600 –> 00:26:14,880
Because AI will generate plausible statements
741
00:26:14,880 –> 00:26:16,400
about anything you let it touch.
742
00:26:16,400 –> 00:26:18,760
Your policies, your customers, your risk posture,
743
00:26:18,760 –> 00:26:20,480
your incidents, your strategy.
744
00:26:20,480 –> 00:26:21,760
And if you don’t build a mechanism
745
00:26:21,760 –> 00:26:24,840
that forces humans to validate, own, and record decisions,
746
00:26:24,840 –> 00:26:27,720
those plausible statements turn into institutional behavior.
747
00:26:27,720 –> 00:26:30,240
Over time, policies go missing or drift. Missing policies
748
00:26:30,240 –> 00:26:32,920
create obvious gaps; drifting policies create ambiguity.
749
00:26:32,920 –> 00:26:34,600
Ambiguity is where accountability dies
750
00:26:34,600 –> 00:26:36,400
and AI accelerates that death
751
00:26:36,400 –> 00:26:38,520
because it produces reasonable language
752
00:26:38,520 –> 00:26:42,240
that fills the gap between what you meant and what you enforced.
753
00:26:42,240 –> 00:26:44,280
So let’s talk about the thing people really mean
754
00:26:44,280 –> 00:26:45,720
when they complain about governance.
755
00:26:45,720 –> 00:26:46,720
They mean friction.
756
00:26:46,720 –> 00:26:49,680
They mean someone might have to slow down, ask permission,
757
00:26:49,680 –> 00:26:52,040
document a rationale or admit uncertainty.
758
00:26:52,040 –> 00:26:53,280
Yes, that’s the point.
759
00:26:54,800 –> 00:26:57,960
Innovation at enterprise scale is not fast output.
760
00:26:57,960 –> 00:26:59,160
It is safe change.
761
00:26:59,160 –> 00:27:01,160
It is the ability to try something new
762
00:27:01,160 –> 00:27:03,800
without destroying trust, and trust requires constraints
763
00:27:03,800 –> 00:27:06,160
that hold under pressure, not vibes that collapse
764
00:27:06,160 –> 00:27:07,520
when the first incident happens.
765
00:27:07,520 –> 00:27:10,200
This is why shadow AI is not a user problem.
766
00:27:10,200 –> 00:27:12,480
It is the predictable outcome of weak enforcement.
767
00:27:12,480 –> 00:27:15,920
If official tooling feels slow and unofficial tooling feels easy,
768
00:27:15,920 –> 00:27:18,760
people will route around controls, not because they are evil,
769
00:27:18,760 –> 00:27:20,560
because the organization rewarded speed
770
00:27:20,560 –> 00:27:23,240
and didn’t enforce boundaries. The system behaved as designed.
771
00:27:23,240 –> 00:27:24,880
You just didn’t like the consequences.
772
00:27:24,880 –> 00:27:26,520
So the governance reframe is simple.
773
00:27:26,520 –> 00:27:28,480
Governance is how you scale trust.
774
00:27:28,480 –> 00:27:29,960
Trust isn’t a marketing claim.
775
00:27:29,960 –> 00:27:31,400
Trust is an operational property.
776
00:27:31,400 –> 00:27:33,320
It emerges when three things are true.
777
00:27:33,320 –> 00:27:34,960
First, boundaries are clear.
778
00:27:34,960 –> 00:27:37,320
What data is in scope, what is out of scope,
779
00:27:37,320 –> 00:27:39,000
and what must never be inferred.
780
00:27:39,000 –> 00:27:40,240
“Confidential” is not a boundary.
781
00:27:40,240 –> 00:27:41,080
It’s a label.
782
00:27:41,080 –> 00:27:42,800
Boundaries are enforced by access controls,
783
00:27:42,800 –> 00:27:44,920
data classification and workflow rules
784
00:27:44,920 –> 00:27:46,960
that prevent accidental leakage.
785
00:27:46,960 –> 00:27:48,640
Second, decision rights are explicit.
786
00:27:48,640 –> 00:27:51,000
Who can interpret policy, who can approve exceptions,
787
00:27:51,000 –> 00:27:52,360
who can change risk posture,
788
00:27:52,360 –> 00:27:54,240
who can accept blast radius.
789
00:27:54,240 –> 00:27:56,520
If an AI output touches one of those areas,
790
00:27:56,520 –> 00:27:59,400
the workflow must force a named human to accept it.
791
00:27:59,400 –> 00:28:01,360
Not a team, not the business,
792
00:28:01,360 –> 00:28:03,960
a human with a calendar and a manager.
793
00:28:03,960 –> 00:28:05,400
Third, evidence exists.
794
00:28:05,400 –> 00:28:07,720
Logs, decision records and audit trails
795
00:28:07,720 –> 00:28:09,680
that survive the next incident review.
796
00:28:09,680 –> 00:28:11,960
If you can’t reconstruct why a decision was made,
797
00:28:11,960 –> 00:28:12,960
you didn’t govern it.
798
00:28:12,960 –> 00:28:14,000
You hoped it would work.
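To make that third property concrete, here is a minimal sketch in Python of what a reconstructable decision record could look like. Everything in it is illustrative: the DecisionRecord class, its field names, and the sample values are hypothetical, not any particular product’s schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class DecisionRecord:
        # One reconstructable decision: who decided, what, why, and when.
        decision: str       # what was decided
        owner: str          # a named human, not "the business"
        rationale: str      # why, in the owner's own words
        evidence: tuple     # pointers to logs and artifacts
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    # If a past decision cannot be reproduced as a record like this,
    # it was hoped, not governed.
    record = DecisionRecord(
        decision="Treated alert 4211 as low severity",
        owner="j.doe",
        rationale="No external exposure; affected host quarantined",
        evidence=("alert 4211", "containment ticket CHG-0042"),
    )
    print(record.owner, record.decision)

The design choice is the point: the record is immutable, timestamped, and refuses to exist without an owner and a rationale.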
799
00:28:14,000 –> 00:28:16,000
The weird part is that this kind of governance
800
00:28:16,000 –> 00:28:17,240
is not anti-innovation.
801
00:28:17,240 –> 00:28:19,360
It is what makes innovation repeatable.
802
00:28:19,360 –> 00:28:21,720
It turns experimentation into learning.
803
00:28:21,720 –> 00:28:24,080
Because learning requires traceability.
804
00:28:24,080 –> 00:28:24,920
What did we try?
805
00:28:24,920 –> 00:28:25,800
Why did we try it?
806
00:28:25,800 –> 00:28:27,680
What happened and who owned the trade-off?
807
00:28:27,680 –> 00:28:29,200
Without that, you don’t have innovation.
808
00:28:29,200 –> 00:28:31,560
You have uncontrolled behavior and selective memory.
809
00:28:31,560 –> 00:28:34,520
So when someone says governance slows innovation,
810
00:28:34,520 –> 00:28:36,440
the correct response is no.
811
00:28:36,440 –> 00:28:38,520
Governance slows fantasy.
812
00:28:38,520 –> 00:28:40,720
Thinking without enforcement is fantasy.
813
00:28:40,720 –> 00:28:43,600
Enforcement without thinking is bureaucracy.
814
00:28:43,600 –> 00:28:45,280
Your job is to build the middle,
815
00:28:45,280 –> 00:28:48,000
where AI can propose, humans can judge,
816
00:28:48,000 –> 00:28:49,960
and the system can enforce consequences
817
00:28:49,960 –> 00:28:51,600
with no deniability.
818
00:28:51,600 –> 00:28:54,920
The triad: cognition, action, judgment.
819
00:28:54,920 –> 00:28:56,400
So here’s the model that stops this
820
00:28:56,400 –> 00:28:57,960
from turning into a moral lecture
821
00:28:57,960 –> 00:29:00,440
about using AI responsibly.
822
00:29:00,440 –> 00:29:01,360
It’s a systems model.
823
00:29:01,360 –> 00:29:03,200
Three systems, three jobs, no romance:
824
00:29:03,200 –> 00:29:05,720
system of cognition, system of action, system of judgment.
825
00:29:05,720 –> 00:29:07,720
If you can’t name which one you’re operating in,
826
00:29:07,720 –> 00:29:08,760
you’re already drifting.
827
00:29:08,760 –> 00:29:10,800
And drift is how organizations end up
828
00:29:10,800 –> 00:29:13,040
in post-incident meetings pretending nobody
829
00:29:13,040 –> 00:29:14,000
could have seen it coming.
830
00:29:14,000 –> 00:29:15,520
Start with the system of cognition.
831
00:29:15,520 –> 00:29:17,400
This is where M365 Copilot lives.
832
00:29:17,400 –> 00:29:20,040
Chat, summaries, drafts, synthesis
833
00:29:20,040 –> 00:29:22,160
across email, meetings, documents,
834
00:29:22,160 –> 00:29:24,560
and whatever content the Graph can legally expose.
835
00:29:24,560 –> 00:29:26,000
Cognition is not execution.
836
00:29:26,000 –> 00:29:28,040
Cognition is possibility generation,
837
00:29:28,040 –> 00:29:30,200
interpretations, options, narratives,
838
00:29:30,200 –> 00:29:31,440
and proposed next steps.
839
00:29:31,440 –> 00:29:33,840
It is fast, it is wide, it is persuasive.
840
00:29:33,840 –> 00:29:35,080
And it has one built-in flaw.
841
00:29:35,080 –> 00:29:36,560
It will produce coherent language
842
00:29:36,560 –> 00:29:38,680
even when the underlying reality is incoherent.
843
00:29:38,680 –> 00:29:40,760
That’s not a bug, that’s what it’s designed to do.
844
00:29:40,760 –> 00:29:42,280
So the output of cognition
845
00:29:42,280 –> 00:29:45,200
must always be treated as provisional, candidate thinking.
846
00:29:45,200 –> 00:29:47,440
A draft artifact that still needs judgment.
847
00:29:47,440 –> 00:29:50,000
If you treat cognition output as the decision,
848
00:29:50,000 –> 00:29:51,640
you have already collapsed the triad
849
00:29:51,640 –> 00:29:54,080
into a single fragile point of failure.
850
00:29:54,080 –> 00:29:55,200
The model said so.
851
00:29:55,200 –> 00:29:56,600
Now the system of action.
852
00:29:56,600 –> 00:29:58,320
This is where consequences get enforced.
853
00:29:58,320 –> 00:30:00,440
Workflows, approvals, access changes,
854
00:30:00,440 –> 00:30:03,720
containment actions, policy exceptions, audit trails.
855
00:30:03,720 –> 00:30:05,920
In most enterprises, this is some combination
856
00:30:05,920 –> 00:30:07,680
of ticketing, workflow engines,
857
00:30:07,680 –> 00:30:10,400
and the systems that actually touch production reality.
858
00:30:10,400 –> 00:30:11,880
ServiceNow is an obvious example,
859
00:30:11,880 –> 00:30:13,000
but the brand doesn’t matter.
860
00:30:13,000 –> 00:30:13,960
The function matters.
861
00:30:13,960 –> 00:30:15,720
Action is the part of the organization
862
00:30:15,720 –> 00:30:18,080
that turns intent into irreversible effects.
863
00:30:18,080 –> 00:30:19,800
Action systems are allowed to be boring.
864
00:30:19,800 –> 00:30:20,480
They should be.
865
00:30:20,480 –> 00:30:22,800
They exist to create friction in the right places,
866
00:30:22,800 –> 00:30:25,040
to force acknowledgement, to route decisions
867
00:30:25,040 –> 00:30:27,760
to authorized owners, to record evidence,
868
00:30:27,760 –> 00:30:29,280
to stop “looks fine” thinking
869
00:30:29,280 –> 00:30:31,400
from becoming production downtime.
870
00:30:31,400 –> 00:30:32,880
And then there is the system of judgment.
871
00:30:32,880 –> 00:30:34,880
This is the human layer, not as a slogan.
872
00:30:34,880 –> 00:30:37,040
As a constraint the enterprise cannot escape.
873
00:30:37,040 –> 00:30:38,640
Judgment is where intent is framed,
874
00:30:38,640 –> 00:30:41,240
trade-offs are chosen, and responsibility is assigned
875
00:30:41,240 –> 00:30:43,360
to a person who can be held accountable.
876
00:30:43,360 –> 00:30:44,560
Judgment is the only system
877
00:30:44,560 –> 00:30:46,800
that can legitimately answer questions like,
878
00:30:46,800 –> 00:30:47,880
“What matters right now?
879
00:30:47,880 –> 00:30:49,040
“What is acceptable risk?
880
00:30:49,040 –> 00:30:49,880
“What is fair?
881
00:30:49,880 –> 00:30:50,800
“What is defensible?
882
00:30:50,800 –> 00:30:52,520
“What does the organization refuse to do
883
00:30:52,520 –> 00:30:54,080
“even if it is efficient?”
884
00:30:54,080 –> 00:30:55,560
AI cannot answer those questions.
885
00:30:55,560 –> 00:30:56,520
It can mimic answers.
886
00:30:56,520 –> 00:30:58,600
It can produce language that resembles them,
887
00:30:58,600 –> 00:31:00,080
but it cannot own the consequences.
888
00:31:00,080 –> 00:31:01,560
And if it can’t own consequences,
889
00:31:01,560 –> 00:31:03,080
it is not judgment.
890
00:31:03,080 –> 00:31:04,520
That distinction matters.
891
00:31:04,520 –> 00:31:08,040
Because most failed AI strategies accidentally invert the triad.
892
00:31:08,040 –> 00:31:10,800
They treat the system of cognition as if it is judgment,
893
00:31:10,800 –> 00:31:12,280
and they treat the system of action
894
00:31:12,280 –> 00:31:13,800
as if it is optional bureaucracy.
895
00:31:13,800 –> 00:31:15,680
So they get lots of thinking artifacts
896
00:31:15,680 –> 00:31:17,440
and very little enforced reality.
897
00:31:17,440 –> 00:31:19,880
The organization becomes a factory of plausible language
898
00:31:19,880 –> 00:31:21,280
with no operational gravity.
899
00:31:21,280 –> 00:31:23,120
This is the diagnostic line you already have,
900
00:31:23,120 –> 00:31:24,320
and it is not negotiable.
901
00:31:24,320 –> 00:31:27,320
Thinking without enforcement is fantasy.
902
00:31:27,320 –> 00:31:29,120
Enforcement without thinking is bureaucracy.
903
00:31:29,120 –> 00:31:30,720
Now translate that into behavior.
904
00:31:30,720 –> 00:31:32,800
If copilot produces a summary of an incident,
905
00:31:32,800 –> 00:31:33,760
that’s cognition.
906
00:31:33,760 –> 00:31:35,360
If a human classifies severity
907
00:31:35,360 –> 00:31:37,840
and chooses a response posture, that’s judgment.
908
00:31:37,840 –> 00:31:39,840
If a workflow enforces containment steps,
909
00:31:39,840 –> 00:31:41,880
approvals, notifications, and evidence capture,
910
00:31:41,880 –> 00:31:42,560
that’s action.
911
00:31:42,560 –> 00:31:45,040
If any one of those three is missing, you don’t have a system.
912
00:31:45,040 –> 00:31:45,880
You have a vibe.
913
00:31:45,880 –> 00:31:47,480
Cognition without judgment is noise.
914
00:31:47,480 –> 00:31:49,160
It produces options nobody chooses.
915
00:31:49,160 –> 00:31:50,720
Judgment without action is theater.
916
00:31:50,720 –> 00:31:52,720
It produces decisions nobody implements.
917
00:31:52,720 –> 00:31:54,360
Action without judgment is dangerous.
918
00:31:54,360 –> 00:31:56,360
It produces consequences nobody intended.
919
00:31:56,360 –> 00:31:58,680
And here’s the part that makes this model operational,
920
00:31:58,680 –> 00:32:00,000
instead of philosophical.
921
00:32:00,000 –> 00:32:02,280
The hand-offs are where organizations fail.
922
00:32:02,280 –> 00:32:04,920
The output of cognition must enter a judgment moment.
923
00:32:04,920 –> 00:32:07,160
Not “someone should review this.”
924
00:32:07,160 –> 00:32:09,760
A designed checkpoint where a named human must accept
925
00:32:09,760 –> 00:32:11,640
or reject the interpretation,
926
00:32:11,640 –> 00:32:13,800
and where the acceptance triggers action pathways
927
00:32:13,800 –> 00:32:15,280
with recorded ownership.
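As a rough sketch of that checkpoint, assuming a simple Python workflow layer: the function names here, accept_interpretation and trigger_action_pathways, are invented for illustration. The only point is that the path from proposal to action refuses to proceed without a named owner and a recorded rationale.

    def trigger_action_pathways(decision: dict) -> None:
        # Stand-in for the workflow engine that enforces consequences.
        print(f"enforcing response owned by {decision['owner']}")

    def accept_interpretation(proposal: str, owner: str, rationale: str) -> dict:
        # Gate between cognition output and action pathways: nothing moves
        # from "possible" to "done" without a named, visible decision owner.
        if not owner or not rationale:
            raise PermissionError(
                "No named decision owner: the proposal stays provisional.")
        decision = {"proposal": proposal, "owner": owner, "rationale": rationale}
        trigger_action_pathways(decision)  # acceptance triggers action
        return decision

    accept_interpretation(
        proposal="Probable phishing compromise; contain mailbox",
        owner="s.lead",
        rationale="Matches known campaign indicators",
    )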
928
00:32:15,280 –> 00:32:17,200
That is how you stop outsourced judgment,
929
00:32:17,200 –> 00:32:18,920
not by telling people to be careful,
930
00:32:18,920 –> 00:32:20,280
not by publishing guidance,
931
00:32:20,280 –> 00:32:22,920
by engineering a reality where the organization cannot move
932
00:32:22,920 –> 00:32:24,160
from possible to done,
933
00:32:24,160 –> 00:32:26,360
without a human decision owner being visible.
934
00:32:26,360 –> 00:32:27,760
This also explains why governance
935
00:32:27,760 –> 00:32:30,200
that lives only in M365 feels weak.
936
00:32:30,200 –> 00:32:33,200
M365 can guide cognition, it can restrict data exposure,
937
00:32:33,200 –> 00:32:34,240
it can log usage,
938
00:32:34,240 –> 00:32:37,320
but it cannot, by itself, enforce enterprise consequences.
939
00:32:37,320 –> 00:32:39,360
That enforcement lives in action systems.
940
00:32:39,360 –> 00:32:40,600
That’s where you put the brakes.
941
00:32:40,600 –> 00:32:41,840
That’s where you put the receipts.
942
00:32:41,840 –> 00:32:44,000
So when you hear someone describe an AI strategy
943
00:32:44,000 –> 00:32:46,920
as “we rolled out Copilot and trained users,”
944
00:32:46,920 –> 00:32:48,200
you should hear what they didn’t say.
945
00:32:48,200 –> 00:32:49,680
They didn’t say where judgment lives,
946
00:32:49,680 –> 00:32:51,600
they didn’t say where action gets enforced,
947
00:32:51,600 –> 00:32:54,000
they didn’t say how ownership is recorded,
948
00:32:54,000 –> 00:32:55,960
which means they built a cognition layer
949
00:32:55,960 –> 00:32:57,920
on top of an accountability vacuum.
950
00:32:57,920 –> 00:32:59,440
And now, to make this real,
951
00:32:59,440 –> 00:33:01,280
we’re going to run the triad through scenarios
952
00:33:01,280 –> 00:33:03,480
everyone recognizes, starting with the one
953
00:33:03,480 –> 00:33:06,880
that fails the fastest, security incident triage.
954
00:33:06,880 –> 00:33:09,080
Scenario one: security incident triage.
955
00:33:09,080 –> 00:33:11,200
Security incident triage is the cleanest place
956
00:33:11,200 –> 00:33:13,320
to watch this fail because the organization
957
00:33:13,320 –> 00:33:14,960
already believes it has process.
958
00:33:14,960 –> 00:33:16,440
It already believes it has governance,
959
00:33:16,440 –> 00:33:18,720
it already believes it has accountability.
960
00:33:18,720 –> 00:33:20,920
Then Copilot shows up and turns analysis
961
00:33:20,920 –> 00:33:24,080
into a decorative layer that nobody operationalizes.
962
00:33:24,080 –> 00:33:26,200
Start with the system of cognition.
963
00:33:26,200 –> 00:33:28,480
Copilot can ingest the mess humans can’t.
964
00:33:28,480 –> 00:33:31,560
Alert summaries, email threads, teams chat fragments,
965
00:33:31,560 –> 00:33:33,920
meeting notes, Defender notifications,
966
00:33:33,920 –> 00:33:35,920
a pile of half finished work items
967
00:33:35,920 –> 00:33:39,600
and the one key sentence someone dropped in a channel at 2:13 a.m.
968
00:33:39,600 –> 00:33:42,320
It can synthesize those signals into a narrative.
969
00:33:42,320 –> 00:33:43,840
What happened, what might be happening,
970
00:33:43,840 –> 00:33:45,360
what changed, who touched what
971
00:33:45,360 –> 00:33:47,480
and what it thinks the likely root cause is.
972
00:33:47,480 –> 00:33:49,040
That synthesis is valuable.
973
00:33:49,040 –> 00:33:50,200
It compresses time.
974
00:33:50,200 –> 00:33:53,320
It reduces the cognitive load of finding the story.
975
00:33:53,320 –> 00:33:56,360
It proposes interpretations a tired human might miss,
976
00:33:56,360 –> 00:33:57,640
but it is still cognition.
977
00:33:57,640 –> 00:33:59,080
It is still candidate thinking.
978
00:33:59,080 –> 00:34:01,680
So a well-designed triage flow treats Copilot output
979
00:34:01,680 –> 00:34:03,840
the same way it treats an analyst’s first pass.
980
00:34:03,840 –> 00:34:05,920
Provisional, biased by incomplete data
981
00:34:05,920 –> 00:34:09,040
and requiring explicit judgment before consequence.
982
00:34:09,040 –> 00:34:11,120
The system of judgment is where a human does the work
983
00:34:11,120 –> 00:34:12,680
nobody wants to admit is work.
984
00:34:12,680 –> 00:34:14,040
Severity is not a number.
985
00:34:14,040 –> 00:34:15,600
It is an organizational decision
986
00:34:15,600 –> 00:34:18,720
about blast radius, tolerable risk, regulatory exposure
987
00:34:18,720 –> 00:34:19,840
and response posture.
988
00:34:19,840 –> 00:34:21,160
Intent is not a pattern match.
989
00:34:21,160 –> 00:34:23,280
Intent is a claim you will defend later
990
00:34:23,280 –> 00:34:25,360
in a post-incident review, in an audit,
991
00:34:25,360 –> 00:34:27,320
and possibly in a legal conversation.
992
00:34:27,320 –> 00:34:31,000
So the judgment step is where someone has to say out loud,
993
00:34:31,000 –> 00:34:32,720
this is a probable phishing compromise
994
00:34:32,720 –> 00:34:34,400
or this is credential stuffing
995
00:34:34,400 –> 00:34:36,360
or this is an internal misconfiguration
996
00:34:36,360 –> 00:34:37,800
with external symptoms.
997
00:34:37,800 –> 00:34:39,640
And then commit to a response posture.
998
00:34:39,640 –> 00:34:41,560
Contain now and accept disruption
999
00:34:41,560 –> 00:34:43,960
or observe longer and accept uncertainty.
1000
00:34:43,960 –> 00:34:45,560
That isn’t something Copilot can own.
1001
00:34:45,560 –> 00:34:46,640
That is a human decision
1002
00:34:46,640 –> 00:34:48,880
because the organization is the one paying the trade off.
1003
00:34:48,880 –> 00:34:51,320
Now the system of action has to make the decision real.
1004
00:34:51,320 –> 00:34:53,040
This is where most organizations fail.
1005
00:34:53,040 –> 00:34:56,000
They let cognition produce a beautiful triage summary.
1006
00:34:56,000 –> 00:34:57,840
They let a human nod at it
1007
00:34:57,840 –> 00:35:00,760
and then they drop back into chat-driven execution.
1008
00:35:00,760 –> 00:35:02,640
Can someone reset the account?
1009
00:35:02,640 –> 00:35:04,400
Did we block the IP?
1010
00:35:04,400 –> 00:35:06,160
Who owns comms?
1011
00:35:06,160 –> 00:35:08,040
Are we notifying legal?
1012
00:35:08,040 –> 00:35:09,880
All those questions are action questions
1013
00:35:09,880 –> 00:35:11,800
and if they aren’t enforced through workflow
1014
00:35:11,800 –> 00:35:13,400
they become optional under pressure.
1015
00:35:13,400 –> 00:35:15,920
So the action system needs to compile consequence.
1016
00:35:15,920 –> 00:35:17,920
Severity classification triggers a required
1017
00:35:17,920 –> 00:35:20,160
containment playbook, required approvals,
1018
00:35:20,160 –> 00:35:23,040
required communications, required evidence collection
1019
00:35:23,040 –> 00:35:25,440
and required post-incident review.
1020
00:35:25,440 –> 00:35:27,160
Not because ServiceNow is magic,
1021
00:35:27,160 –> 00:35:28,720
but because enforcement is the only way
1022
00:35:28,720 –> 00:35:30,240
to prevent the organization
1023
00:35:30,240 –> 00:35:33,000
from improvising itself into inconsistency.
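Here is a minimal sketch of what compiling consequence could look like, with a made-up severity table and hypothetical step names; the idea is that closure is blocked by the workflow, not by goodwill.

    # Required steps per human-chosen severity; the workflow, not memory,
    # decides what is optional under pressure (nothing is).
    REQUIRED_STEPS = {
        "high": ["containment_playbook", "approvals", "communications",
                 "evidence_collection", "post_incident_review"],
        "low":  ["evidence_collection", "post_incident_review"],
    }

    def blocking_steps(severity: str, completed: set) -> list:
        # Return the steps that still block closure for this severity.
        return [s for s in REQUIRED_STEPS[severity] if s not in completed]

    print(blocking_steps("high", completed={"containment_playbook"}))
    # ['approvals', 'communications', 'evidence_collection',
    #  'post_incident_review']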
1024
00:35:33,000 –> 00:35:35,200
Here’s what the failure mode looks like in the real world.
1025
00:35:35,200 –> 00:35:36,800
Copilot flags risk.
1026
00:35:36,800 –> 00:35:39,280
It surfaces three plausible interpretations.
1027
00:35:39,280 –> 00:35:41,520
Everyone agrees it looks concerning.
1028
00:35:41,520 –> 00:35:43,200
And then nothing is enforced.
1029
00:35:43,200 –> 00:35:44,960
No one is named as the decision owner.
1030
00:35:44,960 –> 00:35:46,920
No one records the rationale for why
1031
00:35:46,920 –> 00:35:49,440
it was treated as low severity versus high.
1032
00:35:49,440 –> 00:35:51,600
No one triggers mandatory containment steps.
1033
00:35:51,600 –> 00:35:54,240
So the organization generates a lot of analysis artifacts
1034
00:35:54,240 –> 00:35:57,280
and a lot of chat messages, but no operational gravity.
1035
00:35:57,280 –> 00:35:58,240
Alert fatigue scales
1036
00:35:58,240 –> 00:36:00,240
because if every alert gets a better narrative
1037
00:36:00,240 –> 00:36:02,240
but still doesn’t produce enforced decisions
1038
00:36:02,240 –> 00:36:04,840
you’ve improved the story and preserved the paralysis.
1039
00:36:04,840 –> 00:36:06,440
People stop trusting the summaries
1040
00:36:06,440 –> 00:36:07,360
not because they’re wrong
1041
00:36:07,360 –> 00:36:10,040
but because the organization never does anything decisive
1042
00:36:10,040 –> 00:36:11,080
with them.
1043
00:36:11,080 –> 00:36:12,400
This is the subtle corrosion.
1044
00:36:12,400 –> 00:36:14,160
The SOC starts to look busy
1045
00:36:14,160 –> 00:36:16,400
while the risk posture stays unchanged.
1046
00:36:16,400 –> 00:36:18,320
Leadership sees dashboards and summaries
1047
00:36:18,320 –> 00:36:20,200
and assumes governance is functioning.
1048
00:36:20,200 –> 00:36:22,880
In reality, the accountability pathway is missing.
1049
00:36:22,880 –> 00:36:24,400
The decision moment was never designed.
1050
00:36:24,400 –> 00:36:26,840
So when the same pattern repeats nobody can say
1051
00:36:26,840 –> 00:36:29,640
“last time we decided X because of Y, and here’s the record.”
1052
00:36:29,640 –> 00:36:33,080
And this is where the line lands every single time.
1053
00:36:33,080 –> 00:36:35,080
Thinking without enforcement is fantasy.
1054
00:36:35,080 –> 00:36:37,520
If the triage summary doesn’t force a choice
1055
00:36:37,520 –> 00:36:39,400
and the choice doesn’t trigger consequence
1056
00:36:39,400 –> 00:36:41,000
then Copilot didn’t make you safer.
1057
00:36:41,000 –> 00:36:43,800
It made you more articulate while you remained indecisive.
1058
00:36:43,800 –> 00:36:44,920
The hard truth is this.
1059
00:36:44,920 –> 00:36:46,680
Analysis without action pathways
1060
00:36:46,680 –> 00:36:48,360
becomes decorative intelligence.
1061
00:36:48,360 –> 00:36:49,800
It produces confidence theater.
1062
00:36:49,800 –> 00:36:51,640
It produces the feeling that the organization
1063
00:36:51,640 –> 00:36:53,640
is handling risk because it can describe risk
1064
00:36:53,640 –> 00:36:54,680
in nice paragraphs.
1065
00:36:54,680 –> 00:36:56,040
But security is not narrative.
1066
00:36:56,040 –> 00:36:57,320
Security is consequence.
1067
00:36:57,320 –> 00:36:59,240
So the proper handoff is brutally simple.
1068
00:36:59,240 –> 00:37:01,760
Copilot proposes interpretations.
1069
00:37:01,760 –> 00:37:04,200
A human selects intent and severity
1070
00:37:04,200 –> 00:37:06,440
and becomes the named owner of that decision.
1071
00:37:06,440 –> 00:37:08,440
The action system enforces the response path
1072
00:37:08,440 –> 00:37:10,000
and records evidence that
1073
00:37:10,000 –> 00:37:12,120
survives the next incident review.
1074
00:37:12,120 –> 00:37:13,840
It also survives leadership denial
1075
00:37:13,840 –> 00:37:16,600
because it removes the ability to pretend nobody decided.
1076
00:37:16,600 –> 00:37:18,600
The AI didn’t decide what mattered.
1077
00:37:18,600 –> 00:37:20,040
It decided what was possible.
1078
00:37:20,040 –> 00:37:21,800
The organization decided nothing.
1079
00:37:21,800 –> 00:37:23,400
And so nothing meaningful happened.
1080
00:37:23,400 –> 00:37:25,960
Scenario two: HR policy interpretation.
1081
00:37:25,960 –> 00:37:28,840
Security triage fails fast because pressure forces the cracks
1082
00:37:28,840 –> 00:37:30,800
to show. HR policy fails differently.
1083
00:37:30,800 –> 00:37:33,800
It fails quietly, politely, and at scale.
1084
00:37:33,800 –> 00:37:35,560
Because HR policy is where organizations
1085
00:37:35,560 –> 00:37:38,360
store ambiguity on purpose: exceptions, discretion,
1086
00:37:38,360 –> 00:37:41,360
manager judgment, union context, local law differences,
1087
00:37:41,360 –> 00:37:44,240
and the uncomfortable reality that fairness isn’t a formula.
1088
00:37:44,240 –> 00:37:47,200
So when you drop a cognitive system into that environment,
1089
00:37:47,200 –> 00:37:48,840
you don’t just get helpful drafts.
1090
00:37:48,840 –> 00:37:50,240
You get doctrine by accident.
1091
00:37:50,240 –> 00:37:51,960
Start again with the system of cognition.
1092
00:37:51,960 –> 00:37:55,120
Copilot can read policy documents, prior HR cases,
1093
00:37:55,120 –> 00:37:57,760
email threads, Teams chats, and whatever precedent
1094
00:37:57,760 –> 00:37:59,640
exists in tickets and knowledge articles.
1095
00:37:59,640 –> 00:38:01,240
It can summarize eligibility rules.
1096
00:38:01,240 –> 00:38:02,960
It can propose response language.
1097
00:38:02,960 –> 00:38:04,760
It can even generate a decision tree
1098
00:38:04,760 –> 00:38:06,440
that looks reasonable and calm.
1099
00:38:06,440 –> 00:38:09,240
This is the part leaders love because it feels like consistency.
1100
00:38:09,240 –> 00:38:10,280
It feels like scale.
1101
00:38:10,280 –> 00:38:13,840
It feels like standard answers, replacing messy human variation.
1102
00:38:13,840 –> 00:38:15,520
But the output is still cognition.
1103
00:38:15,520 –> 00:38:17,840
It is still a plausible interpretation of text.
1104
00:38:17,840 –> 00:38:20,800
The system of judgment is the manager, HR business partner,
1105
00:38:20,800 –> 00:38:22,600
or people leader who has to decide
1106
00:38:22,600 –> 00:38:25,040
what the organization is actually willing to stand behind.
1107
00:38:25,040 –> 00:38:27,800
That includes things policy text rarely captures,
1108
00:38:27,800 –> 00:38:31,240
context, intent, equity, cultural precedent,
1109
00:38:31,240 –> 00:38:32,600
and second order effects.
1110
00:38:32,600 –> 00:38:34,800
A classic example is exception handling:
1111
00:38:34,800 –> 00:38:36,520
an employee asks for a deviation.
1112
00:38:36,520 –> 00:38:38,960
Extended leave, remote work outside the policy,
1113
00:38:38,960 –> 00:38:41,280
an accommodation, a one-off schedule change,
1114
00:38:41,280 –> 00:38:43,680
a benefit interpretation that doesn’t fit the template.
1115
00:38:43,680 –> 00:38:46,640
The AI will generate a neat answer because neatness is its job.
1116
00:38:46,640 –> 00:38:47,720
It will cite sections.
1117
00:38:47,720 –> 00:38:49,320
It will propose language that sounds
1118
00:38:49,320 –> 00:38:51,400
considerate while staying compliant.
1119
00:38:51,400 –> 00:38:52,960
And this is where the failure begins
1120
00:38:52,960 –> 00:38:54,720
because exceptions are not technical.
1121
00:38:54,720 –> 00:38:57,560
Exceptions are moral and organizational decisions.
1122
00:38:57,560 –> 00:38:59,800
They set precedent, they change expectations,
1123
00:38:59,800 –> 00:39:02,920
they alter trust, they become stories employees tell each other,
1124
00:39:02,920 –> 00:39:05,400
which is how culture gets enforced in the real world.
1125
00:39:05,400 –> 00:39:08,840
So the judgment moment is not “what does the policy say?”
1126
00:39:08,840 –> 00:39:11,880
The judgment moment is: do we apply the policy strictly here?
1127
00:39:11,880 –> 00:39:13,000
Do we allow an exception?
1128
00:39:13,000 –> 00:39:14,800
And if we do, what boundary are we setting?
1129
00:39:14,800 –> 00:39:16,360
So this doesn’t become the new rule.
1130
00:39:16,360 –> 00:39:18,200
If you don’t force that decision to be explicit,
1131
00:39:18,200 –> 00:39:19,600
Copilot will produce an answer
1132
00:39:19,600 –> 00:39:21,360
that becomes policy by repetition.
1133
00:39:21,360 –> 00:39:22,480
Now the system of action.
1134
00:39:22,480 –> 00:39:25,160
In HR, action is not “create a ticket.”
1135
00:39:25,160 –> 00:39:28,240
Action is: record the decision, apply downstream effects,
1136
00:39:28,240 –> 00:39:30,320
and make the exception visible to governance.
1137
00:39:30,320 –> 00:39:32,240
That means the workflow has to capture
1138
00:39:32,240 –> 00:39:34,040
who decided what rationale they used,
1139
00:39:34,040 –> 00:39:36,640
what policy they referenced, what exception they granted,
1140
00:39:36,640 –> 00:39:38,400
and what review is required.
1141
00:39:38,400 –> 00:39:41,000
It also needs to trigger whatever consequences follow.
1142
00:39:41,000 –> 00:39:43,640
Payroll changes, access adjustments, manager approvals,
1143
00:39:43,640 –> 00:39:46,440
legal review, or a case review board, if required.
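Here is one possible shape for that capture, sketched in Python. The ExceptionDecision fields and the sample request are hypothetical, chosen only to show that the exception, the named decider, the rationale, and the review date travel together instead of dissolving into chat.

    from dataclasses import dataclass

    @dataclass
    class ExceptionDecision:
        # An HR exception made visible to governance, not buried in chat.
        request: str
        intent: str        # "strict", "exception", or "escalate"
        decided_by: str    # a named manager or HR business partner
        rationale: str
        policy_ref: str
        review_due: str    # when the exception gets re-examined

    def record_exception(d: ExceptionDecision) -> None:
        assert d.intent in {"strict", "exception", "escalate"}
        # Downstream consequences hang off the recorded decision:
        # payroll change, manager approval, legal review, case review board.
        print(f"logged: {d.intent} on {d.policy_ref}, owned by {d.decided_by}")

    record_exception(ExceptionDecision(
        request="Remote work outside policy for Q3",
        intent="exception",
        decided_by="a.manager",
        rationale="Accommodation; reviewed with the HR business partner",
        policy_ref="HR-POL-12 s.4",
        review_due="end of Q3",
    ))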
1144
00:39:46,440 –> 00:39:48,920
This is where thinking without enforcement is fantasy
1145
00:39:48,920 –> 00:39:49,720
gets sharper.
1146
00:39:49,720 –> 00:39:51,640
If Copilot drafts a perfect response,
1147
00:39:51,640 –> 00:39:53,560
but the organization doesn’t force the manager
1148
00:39:53,560 –> 00:39:55,160
to log the decision and own it,
1149
00:39:55,160 –> 00:39:57,520
then the only thing that scaled was deniability.
1150
00:39:57,520 –> 00:40:00,800
The manager can say, I followed what the system suggested.
1151
00:40:00,800 –> 00:40:02,800
HR can say, we didn’t approve that.
1152
00:40:02,800 –> 00:40:05,360
Legal can say, that’s not our interpretation.
1153
00:40:05,360 –> 00:40:07,200
And the employee hears one message.
1154
00:40:07,200 –> 00:40:09,720
The organization has no coherent owner for fairness.
1155
00:40:09,720 –> 00:40:10,960
That is not a people problem.
1156
00:40:10,960 –> 00:40:12,280
It is a system design problem.
1157
00:40:12,280 –> 00:40:14,320
Here’s the predictable failure mode.
1158
00:40:14,320 –> 00:40:15,760
A manager pings Copilot.
1159
00:40:15,760 –> 00:40:18,640
Employee is requesting X based on policy Y.
1160
00:40:18,640 –> 00:40:19,880
What should I say?
1161
00:40:19,880 –> 00:40:21,600
Copilot produces a clean answer.
1162
00:40:21,600 –> 00:40:22,800
The manager forwards it.
1163
00:40:22,800 –> 00:40:24,920
The employee accepts it as an official position.
1164
00:40:24,920 –> 00:40:27,560
A week later, another employee requests the same thing.
1165
00:40:27,560 –> 00:40:30,440
Someone else asks Copilot; the wording is slightly different,
1166
00:40:30,440 –> 00:40:31,880
but the intent is similar.
1167
00:40:31,880 –> 00:40:33,520
Now you have inconsistency.
1168
00:40:33,520 –> 00:40:36,040
Or worse, you have a de facto rule that was never approved.
1169
00:40:36,040 –> 00:40:37,080
Then it escalates.
1170
00:40:37,080 –> 00:40:38,560
An employee disputes treatment.
1171
00:40:38,560 –> 00:40:39,680
HR investigates.
1172
00:40:39,680 –> 00:40:42,040
They ask, who approved this exception?
1173
00:40:42,040 –> 00:40:42,640
Nobody knows.
1174
00:40:42,640 –> 00:40:43,320
There’s no record.
1175
00:40:43,320 –> 00:40:44,000
There was a chat.
1176
00:40:44,000 –> 00:40:44,680
There was an email.
1177
00:40:44,680 –> 00:40:46,440
There was a paragraph that sounded official.
1178
00:40:46,440 –> 00:40:47,640
But there was no decision log.
1179
00:40:47,640 –> 00:40:49,640
No owner, no rationale, no review.
1180
00:40:49,920 –> 00:40:53,360
So “the system said so,” which is the most corrosive phrase in any organization
1181
00:40:53,360 –> 00:40:57,880
because it converts leadership into bureaucracy and bureaucracy into moral abdication.
1182
00:40:57,880 –> 00:41:01,320
The uncomfortable point is that AI didn’t create the ambiguity.
1183
00:41:01,320 –> 00:41:02,920
HR policy already contains it.
1184
00:41:02,920 –> 00:41:08,400
AI just makes it fast to manufacture official sounding interpretations of that ambiguity.
1185
00:41:08,400 –> 00:41:12,240
So the operational fix isn’t “tell managers not to use Copilot.”
1186
00:41:12,240 –> 00:41:13,200
That’s fantasy too.
1187
00:41:13,200 –> 00:41:15,240
They will use it because speed wins.
1188
00:41:15,240 –> 00:41:16,640
The fix is to design the handoff.
1189
00:41:16,640 –> 00:41:20,040
Copilot can propose language and surface precedent patterns.
1190
00:41:20,040 –> 00:41:21,880
The manager must select intent.
1191
00:41:21,880 –> 00:41:25,600
Strict policy application, approved exception, or escalation required.
1192
00:41:25,600 –> 00:41:27,480
Then the workflow enforces consequence.
1193
00:41:27,480 –> 00:41:31,560
It records the decision, triggers approvals, and creates an auditable trail.
1194
00:41:31,560 –> 00:41:35,440
And the line that matters here is the same one as security, just quieter.
1195
00:41:35,440 –> 00:41:37,360
The AI didn’t decide what was fair.
1196
00:41:37,360 –> 00:41:38,560
It decided what was possible.
1197
00:41:38,560 –> 00:41:40,440
The organization decided nothing.
1198
00:41:40,440 –> 00:41:43,000
And then everyone acted like a decision happened anyway.
1199
00:41:43,000 –> 00:41:45,480
Scenario three: IT change management.
1200
00:41:45,480 –> 00:41:49,640
IT change management is where outsourced judgment stops being a philosophical concern
1201
00:41:49,640 –> 00:41:51,960
and becomes an outage with a timestamp.
1202
00:41:51,960 –> 00:41:56,520
And it’s where AI makes the failure cleaner, faster and harder to argue with because the
1203
00:41:56,520 –> 00:41:59,920
paperwork looks better right up until production breaks.
1204
00:41:59,920 –> 00:42:01,840
Start with the system of cognition.
1205
00:42:01,840 –> 00:42:04,600
Copilot can draft what humans hate writing.
1206
00:42:04,600 –> 00:42:08,920
Impact analysis, dependency lists, comms plans, rollback steps, stakeholder summaries and
1207
00:42:08,920 –> 00:42:11,240
the ritual language of risk and mitigation.
1208
00:42:11,240 –> 00:42:15,360
It can scan related tickets, past incidents, meeting notes and the change description itself
1209
00:42:15,360 –> 00:42:19,160
and produce something that sounds like a mature engineering organization.
1210
00:42:19,160 –> 00:42:20,240
This is the seductive part.
1211
00:42:20,240 –> 00:42:21,440
The artifact looks complete.
1212
00:42:21,440 –> 00:42:22,600
The tone is confident.
1213
00:42:22,600 –> 00:42:24,400
The structure matches what the CAB expects.
1214
00:42:24,400 –> 00:42:26,160
It reads like someone did the work.
1215
00:42:26,160 –> 00:42:27,720
But cognition is still not judgment.
1216
00:42:27,720 –> 00:42:30,400
It is still a proposal generator wearing a suit.
1217
00:42:30,400 –> 00:42:35,240
The system of judgment in change management is the part most organizations pretend is automated
1218
00:42:35,240 –> 00:42:36,240
already:
1219
00:42:36,240 –> 00:42:37,520
acceptable blast radius.
1220
00:42:37,520 –> 00:42:39,400
Because blast radius is not a technical measurement.
1221
00:42:39,400 –> 00:42:43,560
It is an organizational decision about what the business is willing to lose today.
1222
00:42:43,560 –> 00:42:48,080
Reliability, performance, customer trust, or operational focus, so a change can happen.
1223
00:42:48,080 –> 00:42:49,760
The AI can list potential impacts.
1224
00:42:49,760 –> 00:42:51,280
It can propose mitigation steps.
1225
00:42:51,280 –> 00:42:53,040
It can recommend a maintenance window.
1226
00:42:53,040 –> 00:42:56,800
What it cannot do is decide which failure mode is tolerable, which stakeholder gets to
1227
00:42:56,800 –> 00:43:00,200
be angry and which risk is worth accepting to land the change.
1228
00:43:00,200 –> 00:43:04,640
That’s the architect’s job, or the change owner’s job, or whoever will be sitting in the post-incident
1229
00:43:04,640 –> 00:43:07,760
review explaining why this was reasonable.
1230
00:43:07,760 –> 00:43:12,200
So the judgment moment is where someone must say explicitly: we are changing X.
1231
00:43:12,200 –> 00:43:16,880
We believe the blast radius is Y, the rollback posture is Z and we accept the residual
1232
00:43:16,880 –> 00:43:17,880
risk.
1233
00:43:17,880 –> 00:43:19,520
Without that sentence you don’t have a change.
1234
00:43:19,520 –> 00:43:22,480
You have a document. Now the system of action is where this becomes real.
1235
00:43:22,480 –> 00:43:26,120
A change workflow should enforce the consequences of that judgment.
1236
00:43:26,120 –> 00:43:31,680
Approvals, blackout windows, evidence attachments, pre-change checks, implementation steps and
1237
00:43:31,680 –> 00:43:36,320
mandatory post-implementation review if certain conditions are met.
1238
00:43:36,320 –> 00:43:39,320
Action systems exist because humans under pressure improvise.
1239
00:43:39,320 –> 00:43:40,320
They skip steps.
1240
00:43:40,320 –> 00:43:41,720
They rely on memory.
1241
00:43:41,720 –> 00:43:43,440
They just do it real quick.
1242
00:43:43,440 –> 00:43:46,600
Workflow exists to stop improvisation from becoming policy.
1243
00:43:46,600 –> 00:43:48,560
Here’s the failure mode AI introduces.
1244
00:43:48,560 –> 00:43:50,600
Copilot produces a polished impact analysis.
1245
00:43:50,600 –> 00:43:52,120
It proposes a rollback plan.
1246
00:43:52,120 –> 00:43:53,600
It suggests comms language.
1247
00:43:53,600 –> 00:43:57,880
The change record looks good enough that reviewers assume the hard thinking already happened.
1248
00:43:57,880 –> 00:43:59,440
The CAB meeting moves faster.
1249
00:43:59,440 –> 00:44:04,000
Approvals happen by momentum and the organization confuses completeness with correctness.
1250
00:44:04,000 –> 00:44:05,480
Then the change lands.
1251
00:44:05,480 –> 00:44:07,120
Something unexpected happens.
1252
00:44:07,120 –> 00:44:11,280
Not necessarily a hallucination problem, a context problem, a hidden dependency, a permission
1253
00:44:11,280 –> 00:44:15,920
edge case, a replication lag, an integration that was out of scope in the mind of the person
1254
00:44:15,920 –> 00:44:19,240
who requested the change but very much in scope in production reality.
1255
00:44:19,240 –> 00:44:22,920
Now you have an outage and you start asking the only question that matters.
1256
00:44:22,920 –> 00:44:24,880
Who decided this was acceptable?
1257
00:44:24,880 –> 00:44:28,720
In too many organizations the answer is a shrug disguised as process.
1258
00:44:28,720 –> 00:44:32,960
The change record has text, it has risk language, it has mitigation bullets, it even has a rollback
1259
00:44:32,960 –> 00:44:37,400
section, but nobody can point to the judgment moment where a human accepted the blast radius
1260
00:44:37,400 –> 00:44:38,600
and owned the trade-off.
1261
00:44:38,600 –> 00:44:41,920
Because the artifact got treated as the decision. That’s outsourced judgment in its
1262
00:44:41,920 –> 00:44:46,040
purest form: the document looks like governance, so governance is assumed to exist.
1263
00:44:46,040 –> 00:44:49,400
This is why “looks fine” is one of the most expensive phrases in IT.
1264
00:44:49,400 –> 00:44:52,280
AI increases the number of things that look fine.
1265
00:44:52,280 –> 00:44:53,280
That’s the problem.
1266
00:44:53,280 –> 00:44:56,800
The correct handoff is not “Copilot writes the change record and we move on.”
1267
00:44:56,800 –> 00:45:00,880
The correct handoff is: Copilot proposes the thinking artifact, a human selects intent
1268
00:45:00,880 –> 00:45:05,800
and declares risk posture and the workflow enforces that posture with irreversible constraints.
1269
00:45:05,800 –> 00:45:10,920
For example, if the human selects high blast radius, the action system should require senior
1270
00:45:10,920 –> 00:45:15,760
approval, enforce a tighter window, require a tested rollback and force a post implementation
1271
00:45:15,760 –> 00:45:17,360
review.
1272
00:45:17,360 –> 00:45:22,120
If the human selects low blast radius, the workflow can be lighter, but it should still
1273
00:45:22,120 –> 00:45:24,160
record who made that classification.
1274
00:45:24,160 –> 00:45:28,240
That is how you turn change management from paper compliance into operational gravity.
1275
00:45:28,240 –> 00:45:32,120
And if you want a diagnostic that cuts through the theater, ask this: can you reconstruct
1276
00:45:32,120 –> 00:45:36,040
the decision owner and their rationale for every outage causing change in the last year?
1277
00:45:36,040 –> 00:45:37,880
If you can’t, you don’t have change management.
1278
00:45:37,880 –> 00:45:41,840
You have a calendar. So the uncomfortable point stands even here: the AI didn’t cause
1279
00:45:41,840 –> 00:45:43,000
the outage.
1280
00:45:43,000 –> 00:45:44,240
Unowned judgment did.
1281
00:45:44,240 –> 00:45:47,880
The AI generated what was possible; the organization never forced a human to decide what
1282
00:45:47,880 –> 00:45:52,680
was acceptable and the system enforced nothing until production enforced it for you.
1283
00:45:52,680 –> 00:45:55,080
The skills AI makes more valuable:
1284
00:45:55,080 –> 00:45:56,080
Judgment.
1285
00:45:56,080 –> 00:45:58,880
After the scenarios, the pattern should feel obvious.
1286
00:45:58,880 –> 00:46:00,680
AI didn’t break the organization.
1287
00:46:00,680 –> 00:46:04,880
It revealed what was missing, and what’s missing almost everywhere is judgment.
1288
00:46:04,880 –> 00:46:08,040
Not intelligence, not information, not output capacity.
1289
00:46:08,040 –> 00:46:12,480
Judgment, the ability to choose under uncertainty, to name trade-offs and to own consequences
1290
00:46:12,480 –> 00:46:13,480
in public.
1291
00:46:13,480 –> 00:46:15,680
AI increases the volume of plausible options.
1292
00:46:15,680 –> 00:46:19,280
That means it increases the volume of uncertainty you have to arbitrate.
1293
00:46:19,280 –> 00:46:23,040
If your organization was already weak at arbitration, AI doesn’t help.
1294
00:46:23,040 –> 00:46:24,480
It just accelerates the collapse.
1295
00:46:24,480 –> 00:46:25,680
This is the uncomfortable truth.
1296
00:46:25,680 –> 00:46:27,680
AI doesn’t replace decision making.
1297
00:46:27,680 –> 00:46:29,800
It industrializes decision pressure.
1298
00:46:29,800 –> 00:46:31,480
Before the bottleneck was production.
1299
00:46:31,480 –> 00:46:35,360
Writing the draft, building the deck, pulling the data, creating the status report, AI makes
1300
00:46:35,360 –> 00:46:36,360
those cheap.
1301
00:46:36,360 –> 00:46:40,160
Which means the scarce resource becomes the part nobody can automate.
1302
00:46:40,160 –> 00:46:44,640
Deciding what matters, what is acceptable, and what you are willing to be accountable for.
1303
00:46:44,640 –> 00:46:49,400
And when judgment becomes the scarce resource, organizations do what they always do with scarcity.
1304
00:46:49,400 –> 00:46:50,400
They try to outsource it.
1305
00:46:50,400 –> 00:46:55,120
They treat the AI’s output as if it is a decision because the output looks like a decision.
1306
00:46:55,120 –> 00:46:59,040
It has structure, it has confidence, it has bullet points, it has a neat conclusion.
1307
00:46:59,040 –> 00:47:01,480
In a culture trained to worship throughput, that’s enough.
1308
00:47:01,480 –> 00:47:03,400
The artifact gets mistaken for ownership.
1309
00:47:03,400 –> 00:47:04,720
But ownership is not a format.
1310
00:47:04,720 –> 00:47:05,840
Ownership is a person.
1311
00:47:05,840 –> 00:47:07,440
Judgment is not being smart.
1312
00:47:07,440 –> 00:47:10,120
It’s arbitration under constraints you didn’t choose.
1313
00:47:10,120 –> 00:47:11,120
Time.
1314
00:47:11,120 –> 00:47:12,120
Politics.
1315
00:47:12,120 –> 00:47:13,120
Risk.
1316
00:47:13,120 –> 00:47:14,120
Unclear data.
1317
00:47:14,120 –> 00:47:15,120
Conflicting incentives.
1318
00:47:15,120 –> 00:47:18,320
The human job is to take that mess and still pick a direction with a rationale that
1319
00:47:18,320 –> 00:47:20,560
survives contact with reality.
1320
00:47:20,560 –> 00:47:24,240
AI cannot do that job because AI cannot pay the cost of being wrong.
1321
00:47:24,240 –> 00:47:25,240
So here’s the shift.
1322
00:47:25,240 –> 00:47:29,000
In an AI rich environment, judgment becomes the primary differentiator between teams that
1323
00:47:29,000 –> 00:47:31,800
scale capability and teams that scale confusion.
1324
00:47:31,800 –> 00:47:34,760
Strong judgment looks boring in the way auditors love.
1325
00:47:34,760 –> 00:47:35,760
It names assumptions.
1326
00:47:35,760 –> 00:47:37,600
It identifies what’s unknown.
1327
00:47:37,600 –> 00:47:39,520
It states what would change the decision.
1328
00:47:39,520 –> 00:47:40,720
It documents the trade off.
1329
00:47:40,720 –> 00:47:43,240
It assigns a decision owner who is not a committee.
1330
00:47:43,240 –> 00:47:45,320
It doesn’t hide behind “we’ll monitor.”
1331
00:47:45,320 –> 00:47:48,520
It defines what monitoring means and what triggers action.
1332
00:47:48,520 –> 00:47:50,480
Weak judgment, in contrast, looks productive.
1333
00:47:50,480 –> 00:47:51,480
It ships more drafts.
1334
00:47:51,480 –> 00:47:52,640
It generates more options.
1335
00:47:52,640 –> 00:47:54,440
It produces more analysis artifacts.
1336
00:47:54,440 –> 00:47:55,440
It uses more tools.
1337
00:47:55,440 –> 00:47:56,520
It schedules more meetings.
1338
00:47:56,520 –> 00:47:58,040
It creates more alignment.
1339
00:47:58,040 –> 00:48:01,880
It creates activity that feels like work while avoiding the moment where someone says,
1340
00:48:01,880 –> 00:48:04,960
“I choose this and I accept the consequences.”
1341
00:48:04,960 –> 00:48:07,200
That is why AI makes weak leadership worse.
1342
00:48:07,200 –> 00:48:10,600
It gives weak leadership more ways to avoid choosing while still looking busy.
1343
00:48:10,600 –> 00:48:14,840
Now take the executive lens because executives are the ones most tempted to outsource judgment.
1344
00:48:14,840 –> 00:48:15,840
They live in abstraction.
1345
00:48:15,840 –> 00:48:16,840
They receive summaries.
1346
00:48:16,840 –> 00:48:19,440
They make decisions based on compressed narratives.
1347
00:48:19,440 –> 00:48:22,760
So an AI generated summary feels like a perfect fit.
1348
00:48:22,760 –> 00:48:25,920
Better compression, more coverage, fewer gaps.
1349
00:48:25,920 –> 00:48:27,360
Except it doesn’t reduce gaps.
1350
00:48:27,360 –> 00:48:28,360
It hides them.
1351
00:48:28,360 –> 00:48:31,440
The KPI for leadership in an AI environment can’t be throughput.
1352
00:48:31,440 –> 00:48:32,440
Throughput will always go up.
1353
00:48:32,440 –> 00:48:36,640
The KPI has to be decision quality and decision quality is visible in outcomes.
1354
00:48:36,640 –> 00:48:39,320
And in the trail of reasoning you can defend later.
1355
00:48:39,320 –> 00:48:42,840
The hard part is that judgment discipline requires admitting uncertainty.
1356
00:48:42,840 –> 00:48:46,440
Publicly, in writing, with your name attached. Most organizations punish that.
1357
00:48:46,440 –> 00:48:47,720
They reward certainty theater.
1358
00:48:47,720 –> 00:48:52,200
They reward confidence even when it’s fake because fake confidence moves meetings forward.
1359
00:48:52,200 –> 00:48:56,360
AI produces fake confidence on demand so the temptation becomes structural.
1360
00:48:56,360 –> 00:48:59,440
This is why outsourced judgment is the real failure mode.
1361
00:48:59,440 –> 00:49:01,000
Hallucinations are easy to blame.
1362
00:49:01,000 –> 00:49:03,800
Decision avoidance is harder because it implicates everyone.
1363
00:49:03,800 –> 00:49:07,720
So if you want a practical definition of judgment in this model, here it is.
1364
00:49:07,720 –> 00:49:10,440
Judgment is selecting a trade-off and making it explicit.
1365
00:49:10,440 –> 00:49:12,920
Everything else is commentary.
1366
00:49:12,920 –> 00:49:17,120
When a security lead classifies an incident as low severity, they are choosing a trade-off.
1367
00:49:17,120 –> 00:49:19,400
Less disruption now, more risk later.
1368
00:49:19,400 –> 00:49:22,840
When a manager grants an HR exception, they are choosing a trade-off.
1369
00:49:22,840 –> 00:49:25,360
Compassion now, precedent later.
1370
00:49:25,360 –> 00:49:28,240
When an architect approves a change, they are choosing a trade-off.
1371
00:49:28,240 –> 00:49:30,600
Progress now, blast radius later.
1372
00:49:30,600 –> 00:49:33,880
When finance holds forecast guidance, they are choosing a trade-off.
1373
00:49:33,880 –> 00:49:35,880
Stability now, credibility later.
1374
00:49:35,880 –> 00:49:37,440
AI can list those trade-offs.
1375
00:49:37,440 –> 00:49:40,680
It cannot choose one that aligns with your values and constraints.
1376
00:49:40,680 –> 00:49:41,920
That choice is leadership.
1377
00:49:41,920 –> 00:49:45,600
And the organization needs to start treating that as the work, not as an overhead.
1378
00:49:45,600 –> 00:49:46,800
So the skill stack shifts.
1379
00:49:46,800 –> 00:49:50,600
The person who can arbitrate under uncertainty becomes more valuable than the person who
1380
00:49:50,600 –> 00:49:52,080
can draft under pressure.
1381
00:49:52,080 –> 00:49:56,160
The team that can frame intent and enforce ownership will outperform the team that can
1382
00:49:56,160 –> 00:49:58,080
generate 100 plausible paths.
1383
00:49:58,080 –> 00:50:01,800
The enterprise that builds judgment moments into workflows will scale trust.
1384
00:50:01,800 –> 00:50:03,880
The enterprise that doesn’t will scale deniability.
1385
00:50:03,880 –> 00:50:05,200
And that’s the final irony.
1386
00:50:05,200 –> 00:50:06,760
AI was sold as productivity.
1387
00:50:06,760 –> 00:50:10,800
What it actually does is expose whether your organization is capable of responsible choice.
1388
00:50:10,800 –> 00:50:12,880
If you are, AI amplifies clarity.
1389
00:50:12,880 –> 00:50:15,040
If you aren’t, it amplifies entropy.
1390
00:50:15,040 –> 00:50:18,000
The skills AI makes more valuable: problem framing.
1391
00:50:18,000 –> 00:50:22,040
If judgment is the scarce resource, problem framing is the control surface that determines
1392
00:50:22,040 –> 00:50:24,360
whether judgment even has a chance.
1393
00:50:24,360 –> 00:50:26,840
Most organizations treat framing like preamble.
1394
00:50:26,840 –> 00:50:30,760
A few sentences at the top of a doc, then they rush to the real work.
1395
00:50:30,760 –> 00:50:32,640
AI punishes that habit immediately.
1396
00:50:32,640 –> 00:50:34,840
Because if you don’t frame the problem, the model will.
1397
00:50:34,840 –> 00:50:38,680
And it will frame it using whatever scraps of context it can infer.
1398
00:50:38,680 –> 00:50:43,400
Your wording, your org’s historical artifacts, and the ambient assumptions embedded in your
1399
00:50:43,400 –> 00:50:44,400
data.
1400
00:50:44,400 –> 00:50:47,000
And that’s not collaboration, that’s abdication.
1401
00:50:47,000 –> 00:50:51,560
Problem framing is the act of declaring intent, constraints, and success conditions before
1402
00:50:51,560 –> 00:50:53,160
you generate options.
1403
00:50:53,160 –> 00:50:57,440
It is the difference between “help me write something” and “help me decide something.”
1404
00:50:57,440 –> 00:51:00,080
AI is excellent at generating possibilities.
1405
00:51:00,080 –> 00:51:04,680
It is useless at deciding relevance unless you tell it what relevance means here.
1406
00:51:04,680 –> 00:51:07,240
This is why AI often produces polished nonsense.
1407
00:51:07,240 –> 00:51:11,320
Not because the model is broken, but because the question was bad. A bad frame produces outputs
1408
00:51:11,320 –> 00:51:14,200
that are internally coherent and externally wrong.
1409
00:51:14,200 –> 00:51:17,920
They sound reasonable, but they solve a different problem than the one you actually have.
1410
00:51:17,920 –> 00:51:19,600
They optimize for the wrong stakeholder.
1411
00:51:19,600 –> 00:51:20,960
They assume the wrong constraints.
1412
00:51:20,960 –> 00:51:22,800
They move fast in the wrong direction.
1413
00:51:22,800 –> 00:51:25,960
And the organization accepts them because they read well and nobody wants to admit the
1414
00:51:25,960 –> 00:51:27,280
initial ask was vague.
1415
00:51:27,280 –> 00:51:28,680
That’s the core failure.
1416
00:51:28,680 –> 00:51:30,960
Vague intent scales into confident artifacts.
1417
00:51:30,960 –> 00:51:34,400
Good framing fixes that by doing three things up front.
1418
00:51:34,400 –> 00:51:35,960
First, it declares the decision type.
1419
00:51:35,960 –> 00:51:40,200
Is this a recommendation, a policy interpretation, a communication draft, an escalation, or an
1420
00:51:40,200 –> 00:51:41,200
irreversible action?
1421
00:51:41,200 –> 00:51:44,280
If you can’t name the decision type, you can’t govern the output.
1422
00:51:44,280 –> 00:51:46,440
You can’t decide what validation is required.
1423
00:51:46,440 –> 00:51:48,040
You can’t decide who needs to own it.
1424
00:51:48,040 –> 00:51:49,520
You’re just generating text.
1425
00:51:49,520 –> 00:51:51,320
Second, it declares constraints.
1426
00:51:51,320 –> 00:51:52,960
Not aspirational constraints.
1427
00:51:52,960 –> 00:51:54,240
Real constraints.
1428
00:51:54,240 –> 00:51:55,400
What must be true?
1429
00:51:55,400 –> 00:51:56,400
What must not happen?
1430
00:51:56,400 –> 00:51:57,880
What sources are authoritative?
1431
00:51:57,880 –> 00:51:59,160
What time window matters?
1432
00:51:59,160 –> 00:52:00,640
What risk threshold applies?
1433
00:52:00,640 –> 00:52:02,120
What are you not allowed to claim?
1434
00:52:02,120 –> 00:52:03,320
Constraints are the rails.
1435
00:52:03,320 –> 00:52:05,280
Without rails you get plausible drift.
1436
00:52:05,280 –> 00:52:07,040
Third, it declares success conditions.
1437
00:52:07,040 –> 00:52:09,640
What will a good output enable a human to do?
1438
00:52:09,640 –> 00:52:14,080
Decide severity, notify stakeholders, approve a change, reject an exception.
1439
00:52:14,080 –> 00:52:16,720
If the output doesn’t change an action, it’s entertainment.
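As an illustration, those three declarations can be written down as a tiny structure. This Frame class and its fields are hypothetical, but they show how a frame collapses ambiguity before a single prompt is written.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        # A frame decomposed into the three declarations above.
        decision_type: str   # e.g. "recommendation", "escalation"
        constraints: list    # what must be true, what must never happen
        success: str         # what a good output lets a human do

        def as_prompt_prefix(self) -> str:
            lines = [f"Decision type: {self.decision_type}"]
            lines += [f"Constraint: {c}" for c in self.constraints]
            lines.append(f"A good output enables: {self.success}")
            return "\n".join(lines)

    frame = Frame(
        decision_type="change assessment",
        constraints=["two-hour window", "zero tolerance for data loss"],
        success="an owner can approve or reject the deployment",
    )
    print(frame.as_prompt_prefix())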
1440
00:52:16,720 –> 00:52:18,400
And here’s the uncomfortable part.
1441
00:52:18,400 –> 00:52:20,200
Framing is work that can’t be outsourced.
1442
00:52:20,200 –> 00:52:22,720
It’s upstream judgment in its purest form.
1443
00:52:22,720 –> 00:52:26,400
It forces leaders to say what they want, what they believe, and what they’re willing to
1444
00:52:26,400 –> 00:52:27,400
trade.
1445
00:52:27,400 –> 00:52:28,400
That’s why most leaders skip it.
1446
00:52:28,400 –> 00:52:33,000
They prefer to ask for a strategy or a plan and let the AI fill the void.
1447
00:52:33,000 –> 00:52:37,280
Then they act surprised when the plan is generic or optimistic or politically incoherent.
1448
00:52:37,280 –> 00:52:41,560
The AI didn’t fail, the leader refused to specify what reality they are operating in.
1449
00:52:41,560 –> 00:52:45,200
So the discipline has to be explicit and it has to be small enough that people will actually
1450
00:52:45,200 –> 00:52:46,200
do it.
1451
00:52:46,200 –> 00:52:48,480
Here’s the simplest habit that changes everything.
1452
00:52:48,480 –> 00:52:51,640
Before you share any AI output, write a one-sentence frame.
1453
00:52:51,640 –> 00:52:53,800
One sentence, not a template, not a workshop.
1454
00:52:53,800 –> 00:52:57,640
A sentence that names intent, constraints, and the next action.
1455
00:52:57,640 –> 00:52:58,640
Examples sound like this.
1456
00:52:58,640 –> 00:52:59,640
Draft for discussion.
1457
00:52:59,640 –> 00:53:04,000
Propose three response options to this incident summary, assuming we prioritize containment
1458
00:53:04,000 –> 00:53:07,160
over uptime and we must preserve audit evidence.
1459
00:53:07,160 –> 00:53:08,160
Interpretation.
1460
00:53:08,160 –> 00:53:13,160
Summarize how policy X applies to this request and flag where manager discretion or escalation
1461
00:53:13,160 –> 00:53:16,360
is required without implying an approval.
1462
00:53:16,360 –> 00:53:17,360
Change assessment.
1463
00:53:17,360 –> 00:53:21,400
List dependencies and rollback steps for this deployment, assuming a two hour window and
1464
00:53:21,400 –> 00:53:23,440
zero tolerance for data loss.
1465
00:53:23,440 –> 00:53:24,440
Notice what that does.
1466
00:53:24,440 –> 00:53:26,400
It forces the human to state the trade-off.
1467
00:53:26,400 –> 00:53:28,120
It forces constraints into the open.
1468
00:53:28,120 –> 00:53:31,640
It makes it harder to forward the output as if it is a finished decision.
1469
00:53:31,640 –> 00:53:35,280
It reattaches cognition to intent and it also makes the AI better.
1470
00:53:35,280 –> 00:53:38,680
Not because you found a better prompt, but because you finally told the system what game you’re
1471
00:53:38,680 –> 00:53:39,680
playing.
1472
00:53:39,680 –> 00:53:42,680
This is the bridge between the prompt obsession chapter and reality.
1473
00:53:42,680 –> 00:53:44,600
Prompts are downstream, framing is upstream.
1474
00:53:44,600 –> 00:53:47,560
If the upstream is incoherent, downstream effort is just trash.
1475
00:53:47,560 –> 00:53:52,720
In other words, good framing makes fewer prompts necessary, because it collapses ambiguity
1476
00:53:52,720 –> 00:53:54,240
before it becomes output.
1477
00:53:54,240 –> 00:53:58,320
And framing is also how you stop the social failure mode where answer-shaped text becomes
1478
00:53:58,320 –> 00:53:59,560
truth by repetition.
1479
00:53:59,560 –> 00:54:03,720
When the first line of an email says “draft for discussion,” the organization has a chance
1480
00:54:03,720 –> 00:54:05,160
to treat it as a draft.
1481
00:54:05,160 –> 00:54:07,720
When it says nothing, the organization treats it as policy.
1482
00:54:07,720 –> 00:54:11,720
So if you want the high leverage mental shift, stop asking AI for answers and stop asking
1483
00:54:11,720 –> 00:54:13,360
your people for prompts.
1484
00:54:13,360 –> 00:54:14,680
Ask for frames.
1485
00:54:14,680 –> 00:54:17,680
Because the frame determines what the organization is actually doing.
1486
00:54:17,680 –> 00:54:20,080
Thinking, deciding, or just producing language.
1487
00:54:20,080 –> 00:54:23,280
And if you can’t tell which one it is, the organization can’t either.
1488
00:54:23,280 –> 00:54:26,880
The skills AI makes more valuable: context and ethics.
1489
00:54:26,880 –> 00:54:29,080
Problem framing gets you into the right room.
1490
00:54:29,080 –> 00:54:31,880
Context tells you what not to say once you’re in it.
1491
00:54:31,880 –> 00:54:36,000
And AI is terrible at that because context is mostly made of things humans don’t write
1492
00:54:36,000 –> 00:54:37,000
down.
1493
00:54:37,000 –> 00:54:42,120
History, power, timing, reputational risk and the quiet rules about what the organization
1494
00:54:42,120 –> 00:54:43,600
is willing to admit.
1495
00:54:43,600 –> 00:54:46,840
Copilot can summarize what happened, it can infer what might be happening.
1496
00:54:46,840 –> 00:54:50,600
It can propose what could be said, it cannot reliably know what this organization is
1497
00:54:50,600 –> 00:54:54,480
allowed to say to whom, right now, with these stakeholders watching.
1498
00:54:54,480 –> 00:54:55,480
That’s context.
1499
00:54:55,480 –> 00:54:59,440
It’s local, it’s situational, it’s political, it’s often ethical, and it’s where most
1500
00:54:59,440 –> 00:55:01,200
real world damage happens.
1501
00:55:01,200 –> 00:55:03,840
This is why “just draft the email” is not a safe ask.
1502
00:55:03,840 –> 00:55:08,600
The email isn’t just words, it’s a commitment, it creates expectations, it becomes evidence,
1503
00:55:08,600 –> 00:55:11,240
it shapes how people interpret intent.
1504
00:55:11,240 –> 00:55:15,280
And AI will happily draft something that is technically well written and strategically
1505
00:55:15,280 –> 00:55:18,800
catastrophic because it doesn’t feel the cost of being wrong.
1506
00:55:18,800 –> 00:55:22,800
Context awareness is the human skill of mapping an output to the actual environment.
1507
00:55:22,800 –> 00:55:23,960
Who is impacted?
1508
00:55:23,960 –> 00:55:24,960
What is at stake?
1509
00:55:24,960 –> 00:55:28,720
What is irreversible and what will be misunderstood on first read?
1510
00:55:28,720 –> 00:55:33,600
Most organizations don’t teach this explicitly because they assume seniority equals context.
1511
00:55:33,600 –> 00:55:35,800
It doesn’t; seniority often equals abstraction.
1512
00:55:35,800 –> 00:55:40,360
AI makes that worse by making it easy to operate at the level of summary without ever touching
1513
00:55:40,360 –> 00:55:41,360
reality.
1514
00:55:41,360 –> 00:55:45,680
So you need a discipline: before you act on AI output, you ask a context question that the
1515
00:55:45,680 –> 00:55:47,120
model can’t answer for you.
1516
00:55:47,120 –> 00:55:49,520
What is the consequence of being wrong here?
1517
00:55:49,520 –> 00:55:51,960
Who gets harmed if we sound confident and we’re wrong?
1518
00:55:51,960 –> 00:55:54,640
What downstream systems will treat this as authoritative?
1519
00:55:54,640 –> 00:55:58,240
What parts of this statement are unverifiable even if they sound reasonable?
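A sketch of how those four questions could work as a literal gate rather than a reflection exercise; the function name and dictionary shape are invented for illustration:

```python
# The four context questions the model cannot answer for you.
CONTEXT_QUESTIONS = [
    "What is the consequence of being wrong here?",
    "Who gets harmed if we sound confident and we're wrong?",
    "What downstream systems will treat this as authoritative?",
    "What parts of this are unverifiable even if they sound reasonable?",
]

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return the context questions still blank; act only when this is empty."""
    return [q for q in CONTEXT_QUESTIONS if not answers.get(q, "").strip()]
```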
1520
00:55:58,240 –> 00:55:59,440
It’s not paranoia.
1521
00:55:59,440 –> 00:56:00,760
That’s governance of meaning.
1522
00:56:00,760 –> 00:56:06,200
Now add ethics because people like to pretend ethics is optional until it becomes a headline.
1523
00:56:06,200 –> 00:56:08,440
Ethical reasoning is not a compliance module.
1524
00:56:08,440 –> 00:56:12,400
It’s the ability to notice when a decision is being disguised as a suggestion, when an
1525
00:56:12,400 –> 00:56:16,640
exception is being normalized or when a probabilistic summary is being treated as truth because
1526
00:56:16,640 –> 00:56:18,040
it feels convenient.
1527
00:56:18,040 –> 00:56:20,200
The dangerous part of AI isn’t that it lies.
1528
00:56:20,200 –> 00:56:23,720
The dangerous part is that it speaks fluently in the language of legitimacy.
1529
00:56:23,720 –> 00:56:26,240
If it drafts an HR response, it sounds like HR.
1530
00:56:26,240 –> 00:56:28,600
If it drafts a security update, it sounds like security.
1531
00:56:28,600 –> 00:56:31,440
If it drafts an executive statement, it sounds like leadership.
1532
00:56:31,440 –> 00:56:34,200
Tone becomes camouflage.
1533
00:56:34,200 –> 00:56:36,240
That’s how bias and unfairness scale.
1534
00:56:36,240 –> 00:56:39,840
Not through obvious malice but through plausible defaults that nobody challenges because they
1535
00:56:39,840 –> 00:56:41,040
read well.
1536
00:56:41,040 –> 00:56:43,280
And bias is rarely just demographic.
1537
00:56:43,280 –> 00:56:45,960
Organizational bias shows up as invisible priorities.
1538
00:56:45,960 –> 00:56:50,600
Protect the schedule, protect the narrative, protect the executive, minimize escalation, avoid
1539
00:56:50,600 –> 00:56:52,120
admitting uncertainty.
1540
00:56:52,120 –> 00:56:56,120
AI absorbs those priorities from the artifacts you already produced.
1541
00:56:56,120 –> 00:56:59,080
And it amplifies them because that’s what pattern synthesis does.
1542
00:56:59,080 –> 00:57:01,360
So the ethical posture has to be explicit.
1543
00:57:01,360 –> 00:57:02,360
Humans own outcomes.
1544
00:57:02,360 –> 00:57:03,360
Tools do not.
1545
00:57:03,360 –> 00:57:05,360
Not in principle, in practice.
1546
00:57:05,360 –> 00:57:09,440
That means you can’t let “the system said so” become the justification for a decision that
1547
00:57:09,440 –> 00:57:12,880
affects someone’s access, pay, job, health or reputation.
1548
00:57:12,880 –> 00:57:16,440
If your organization accepts that excuse, you’ve built a moral escape hatch.
1549
00:57:16,440 –> 00:57:19,120
And once people have an escape hatch, responsibility diffuses.
1550
00:57:19,120 –> 00:57:20,520
Here’s the operational test.
1551
00:57:20,520 –> 00:57:23,160
Can a human defend the decision without mentioning the AI?
1552
00:57:23,160 –> 00:57:25,280
If the answer is no, you didn’t make a decision.
1553
00:57:25,280 –> 00:57:29,480
You delegated one and you delegated it to a system that cannot be accountable.
1554
00:57:29,480 –> 00:57:33,200
This also ties back to the triad because ethics lives in the handoff between cognition and
1555
00:57:33,200 –> 00:57:34,200
action.
1556
00:57:34,200 –> 00:57:35,200
Cognition proposes.
1557
00:57:35,200 –> 00:57:36,200
Fine.
1558
00:57:36,200 –> 00:57:39,560
But the moment you operationalize an output (send the email, deny the request, contain
1559
00:57:39,560 –> 00:57:44,040
the account, publish the policy interpretation), you’ve moved into action, and action creates consequences
1560
00:57:44,040 –> 00:57:45,720
whether you meant to or not.
1561
00:57:45,720 –> 00:57:50,360
So the judgment moment must include an ethics check that is almost offensively simple.
1562
00:57:50,360 –> 00:57:51,360
Who is affected?
1563
00:57:51,360 –> 00:57:52,360
What is the harm if this is wrong?
1564
00:57:52,360 –> 00:57:54,560
What is the appeal path if we got it wrong?
1565
00:57:54,560 –> 00:57:58,680
If you can’t answer those questions, you are not ready to operationalize the output.
1566
00:57:58,680 –> 00:58:00,520
You’re still thinking or pretending you are.
1567
00:58:00,520 –> 00:58:04,760
This is why the AI mindset shift is not about being nicer to the machine or learning more
1568
00:58:04,760 –> 00:58:05,760
prompt tricks.
1569
00:58:05,760 –> 00:58:09,880
It’s about reasserting human agency where the platform makes agency easy to forget.
1570
00:58:09,880 –> 00:58:11,720
AI will generate text.
1571
00:58:11,720 –> 00:58:12,720
That’s the cheap part now.
1572
00:58:12,720 –> 00:58:14,520
The expensive part is context.
1573
00:58:14,520 –> 00:58:17,160
Knowing what the text means in the world you actually live in.
1574
00:58:17,160 –> 00:58:19,480
And the most expensive part is ethics.
1575
00:58:19,480 –> 00:58:22,000
Owning who gets hurt when you pretend the machine made the call.
1576
00:58:22,000 –> 00:58:26,560
If you can hold context and ethics together, you can use AI without dissolving responsibility.
1577
00:58:26,560 –> 00:58:30,960
If you can’t, the organization will keep scaling fluency and calling it competence.
1578
00:58:30,960 –> 00:58:35,760
And that’s how judgment erodes quietly until the day it becomes undeniable.
1579
00:58:35,760 –> 00:58:38,800
The atrophy pattern: why junior roles break first.
1580
00:58:38,800 –> 00:58:42,680
If you want to see the future of AI in your organization, don’t watch the executives.
1581
00:58:42,680 –> 00:58:43,880
Watch the junior roles.
1582
00:58:43,880 –> 00:58:45,280
They fail first.
1583
00:58:45,280 –> 00:58:49,280
Not because they’re less capable, but because they sit closest to the work that AI replaces
1584
00:58:49,280 –> 00:58:55,680
cleanly, drafting, summarizing, formatting, translating chaos into something that looks coherent.
1585
00:58:55,680 –> 00:58:57,600
And that work was never just output.
1586
00:58:57,600 –> 00:58:58,920
It was apprenticeship.
1587
00:58:58,920 –> 00:59:02,200
Junior roles learn judgment by doing the boring parts under supervision.
1588
00:59:02,200 –> 00:59:06,320
They learn what a good answer looks like because they had to produce 10 bad ones and get
1589
00:59:06,320 –> 00:59:07,320
corrected.
1590
00:59:07,320 –> 00:59:10,360
They learn what matters because someone red-lined their draft and told them why.
1591
00:59:10,360 –> 00:59:14,120
They learn what’s defensible because their manager asked, “How do you know that?”
1592
00:59:14,120 –> 00:59:15,600
And they had to point to evidence.
1593
00:59:15,600 –> 00:59:19,440
AI short circuits that entire loop because now the junior can generate a competent looking
1594
00:59:19,440 –> 00:59:23,560
artifact on the first try, not a correct artifact, a competent looking one.
1595
00:59:23,560 –> 00:59:27,720
And the organization being addicted to speed will accept looks fine as proof of progress,
1596
00:59:27,720 –> 00:59:30,400
especially when the writing is clean and the bullets are crisp.
1597
00:59:30,400 –> 00:59:33,160
This is how capability collapses while throughput rises.
1598
00:59:33,160 –> 00:59:37,800
The junior never builds the muscle for framing the problem, selecting sources, checking assumptions
1599
00:59:37,800 –> 00:59:39,120
or defending trade-offs.
1600
00:59:39,120 –> 00:59:42,840
They build a different muscle, prompt iteration and output polishing.
1601
00:59:42,840 –> 00:59:43,840
That is not judgment.
1602
00:59:43,840 –> 00:59:45,360
That is interface literacy.
1603
00:59:45,360 –> 00:59:49,080
And then leadership acts surprised when the next generation can’t run the system without
1604
00:59:49,080 –> 00:59:50,080
the tool.
1605
00:59:50,080 –> 00:59:51,960
Here’s the structural reason this hits juniors harder.
1606
00:59:51,960 –> 00:59:57,000
Their work is template-shaped: status updates, notes, first drafts, customer emails, requirements,
1607
00:59:57,000 –> 00:59:58,680
summaries, basic analysis.
1608
00:59:58,680 –> 01:00:00,760
AI is an infinite template engine.
1609
01:00:00,760 –> 01:00:05,280
So the junior’s role gets absorbed into the machine’s lowest cost capability and the organization
1610
01:00:05,280 –> 01:00:06,800
calls it efficiency.
1611
01:00:06,800 –> 01:00:09,960
But apprenticeship isn’t efficient, it’s expensive by design.
1612
01:00:09,960 –> 01:00:15,240
It forces slow thinking, it forces error, it forces correction, it forces the junior
1613
01:00:15,240 –> 01:00:17,760
to internalize standards instead of borrowing them.
1614
01:00:17,760 –> 01:00:20,720
When you remove that friction you don’t create a faster junior.
1615
01:00:20,720 –> 01:00:24,480
You create a person who can move text around without understanding what it means and you’ll
1616
01:00:24,480 –> 01:00:28,320
still promote them because the surface level signals look good.
1617
01:00:28,320 –> 01:00:31,480
Now add the second failure mode, overconfidence.
1618
01:00:31,480 –> 01:00:35,200
Fluent text triggers authority bias, people read something well written and assume it was
1619
01:00:35,200 –> 01:00:36,200
well thought.
1620
01:00:36,200 –> 01:00:41,560
AI exploits that bias perfectly because it produces tone, structure and certainty on demand.
1621
01:00:41,560 –> 01:00:44,320
So the junior becomes dangerous in a way they weren’t before.
1622
01:00:44,320 –> 01:00:47,560
They can ship plausible decisions faster than they can recognize risk.
1623
01:00:47,560 –> 01:00:48,720
That’s not a moral indictment.
1624
01:00:48,720 –> 01:00:51,840
It’s a predictable outcome of how humans evaluate language.
1625
01:00:51,840 –> 01:00:54,800
And this is where the atrophy pattern becomes visible.
1626
01:00:54,800 –> 01:00:56,120
Strong performers get stronger.
1627
01:00:56,120 –> 01:00:57,360
Weak performers get hidden.
1628
01:00:57,360 –> 01:00:59,520
A strong junior uses AI as scaffolding.
1629
01:00:59,520 –> 01:01:00,720
They ask better questions.
1630
01:01:00,720 –> 01:01:01,720
They validate outputs.
1631
01:01:01,720 –> 01:01:04,000
They treat the model as a sparring partner.
1632
01:01:04,000 –> 01:01:07,320
They improve faster because they have a base layer of judgment and they use the tool
1633
01:01:07,320 –> 01:01:08,320
to extend it.
1634
01:01:08,320 –> 01:01:11,080
A weak junior uses AI as an authority proxy.
1635
01:01:11,080 –> 01:01:12,880
They accept the first plausible answer.
1636
01:01:12,880 –> 01:01:15,160
They stop learning the underlying domain.
1637
01:01:15,160 –> 01:01:16,840
They confuse speed with competence.
1638
01:01:16,840 –> 01:01:20,600
And because the artifacts look professional, the organization can’t tell the difference until
1639
01:01:20,600 –> 01:01:23,720
the person is in a role where mistakes have blast radius.
1640
01:01:23,720 –> 01:01:25,800
That widening gap is the real workforce risk.
1641
01:01:25,800 –> 01:01:28,040
AI doesn’t democratize expertise.
1642
01:01:28,040 –> 01:01:30,320
It amplifies existing judgment disparities.
1643
01:01:30,320 –> 01:01:32,920
You already see this pattern in every other system.
1644
01:01:32,920 –> 01:01:34,240
Tools don’t fix bad thinking.
1645
01:01:34,240 –> 01:01:35,240
They scale it.
1646
01:01:35,240 –> 01:01:38,920
AI is just the first tool that produces outputs that look like thinking.
1647
01:01:38,920 –> 01:01:40,960
So what happens organizationally?
1648
01:01:40,960 –> 01:01:45,680
Many people stop delegating work to juniors and start delegating judgment to the AI.
1649
01:01:45,680 –> 01:01:48,960
Juniors stop being trained and start being throughput amplifiers.
1650
01:01:48,960 –> 01:01:52,760
Managers stop coaching and start reviewing polished drafts that contain hidden errors.
1651
01:01:52,760 –> 01:01:54,400
The feedback loop degrades.
1652
01:01:54,400 –> 01:01:55,760
Correction becomes sporadic.
1653
01:01:55,760 –> 01:01:56,760
Standards drift.
1654
01:01:56,760 –> 01:01:59,320
And the organization quietly loses its bench strength.
1655
01:01:59,320 –> 01:02:03,560
Then six months later leadership complains about AI skills gaps and buys more training.
1656
01:02:03,560 –> 01:02:05,320
But the gap isn’t AI usage.
1657
01:02:05,320 –> 01:02:06,320
It’s judgment development.
1658
01:02:06,320 –> 01:02:09,800
And this is why the “we’ll train later” lie is especially toxic for junior roles.
1659
01:02:09,800 –> 01:02:13,680
The first two weeks set defaults and juniors are the ones most likely to adopt defaults
1660
01:02:13,680 –> 01:02:14,680
as doctrine.
1661
01:02:14,680 –> 01:02:16,800
If the default is copy paste, they will copy paste.
1662
01:02:16,800 –> 01:02:20,240
If the default is the AI knows, they will believe it.
1663
01:02:20,240 –> 01:02:23,680
If the default is speed gets rewarded, they will optimize for speed.
1664
01:02:23,680 –> 01:02:24,760
That’s not rebellion.
1665
01:02:24,760 –> 01:02:26,280
That’s adaptation.
1666
01:02:26,280 –> 01:02:30,560
So if you care about talent, you have to treat AI as a threat to apprenticeship, not just
1667
01:02:30,560 –> 01:02:31,640
a productivity win.
1668
01:02:31,640 –> 01:02:33,640
You need to design judgment moments.
1669
01:02:33,640 –> 01:02:36,280
Not only for risk and compliance, but for learning.
1670
01:02:36,280 –> 01:02:41,280
This is where juniors must explain rationale, cite sources, and state confidence before their
1671
01:02:41,280 –> 01:02:42,880
work becomes action.
1672
01:02:42,880 –> 01:02:47,040
Otherwise you will build an organization that ships faster, sounds smarter and understands
1673
01:02:47,040 –> 01:02:48,040
less.
1674
01:02:48,040 –> 01:02:49,040
And that is not transformation.
1675
01:02:49,040 –> 01:02:52,080
That is atrophy with better formatting.
1676
01:02:52,080 –> 01:02:53,080
Organizational readiness.
1677
01:02:53,080 –> 01:02:55,160
AI psychological safety.
1678
01:02:55,160 –> 01:02:59,080
So the junior roles break first, but the organization fails for a different reason.
1679
01:02:59,080 –> 01:03:03,280
It fails because people stop speaking clearly when they don’t feel safe to be wrong.
1680
01:03:03,280 –> 01:03:05,560
AI psychological safety isn’t a soft concept.
1681
01:03:05,560 –> 01:03:09,520
It’s an operational requirement because cognitive collaboration requires a behavior most enterprises
1682
01:03:09,520 –> 01:03:12,160
actively punish: saying “I don’t know,” “I disagree,”
1683
01:03:12,160 –> 01:03:14,080
or “This output feels wrong.”
1684
01:03:14,080 –> 01:03:17,360
And if people can’t say that out loud, the system will drift until it hits a wall.
1685
01:03:17,360 –> 01:03:21,560
There are two kinds of fear that show up immediately when you introduce Copilot at scale.
1686
01:03:21,560 –> 01:03:24,600
The first is the obvious one, fear of being replaced.
1687
01:03:24,600 –> 01:03:29,680
That fear makes people hide work, hoard context, and avoid experimenting in ways that would
1688
01:03:29,680 –> 01:03:31,760
make them look less necessary.
1689
01:03:31,760 –> 01:03:35,720
They reduce transparency because transparency feels like volunteering for redundancy.
1690
01:03:35,720 –> 01:03:39,200
The second fear is quieter and more corrosive.
1691
01:03:39,200 –> 01:03:43,080
Fear of being wrong on behalf of the AI.
1692
01:03:43,080 –> 01:03:47,200
Because when you send an AI drafted email or share an AI-generated summary, you aren’t
1693
01:03:47,200 –> 01:03:48,520
just wrong privately.
1694
01:03:48,520 –> 01:03:50,200
You are wrong in a way that looks official.
1695
01:03:50,200 –> 01:03:51,800
You are wrong in a way that spreads.
1696
01:03:51,800 –> 01:03:55,600
You are wrong in a way that can be screenshot, forwarded, and quoted later.
1697
01:03:55,600 –> 01:03:56,600
So people adapt.
1698
01:03:56,600 –> 01:03:59,080
They stop experimenting in high visibility channels.
1699
01:03:59,080 –> 01:04:01,000
They use AI privately and quietly.
1700
01:04:01,000 –> 01:04:04,760
They paste the output with minimal edits because editing takes time.
1701
01:04:04,760 –> 01:04:07,440
And if they are going to be blamed anyway, they might as well move fast.
1702
01:04:07,440 –> 01:04:10,880
Or they avoid AI entirely and resent the people who use it.
1703
01:04:10,880 –> 01:04:15,480
Because the cultural signal becomes, speed is rewarded, mistakes are punished, and ambiguity
1704
01:04:15,480 –> 01:04:16,640
is your problem.
1705
01:04:16,640 –> 01:04:18,320
That is not readiness.
1706
01:04:18,320 –> 01:04:20,200
That is a pressure cooker.
1707
01:04:20,200 –> 01:04:22,640
Now add the enterprise’s favorite habit.
1708
01:04:22,640 –> 01:04:23,840
Perfection culture.
1709
01:04:23,840 –> 01:04:27,720
Most organizations have spent decades training people to pretend certainty.
1710
01:04:27,720 –> 01:04:32,240
They reward the person who sounds confident, not the person who asks the better question.
1711
01:04:32,240 –> 01:04:36,360
They reward the deck that looks complete, not the decision that’s defensible.
1712
01:04:36,360 –> 01:04:38,280
They treat ambiguity as incompetence.
1713
01:04:38,280 –> 01:04:40,560
AI turns that culture into a liability.
1714
01:04:40,560 –> 01:04:43,840
Because AI produces certainty theatre at industrial scale.
1715
01:04:43,840 –> 01:04:48,120
If the organization already struggles to tolerate uncertainty, it will accept confident output
1716
01:04:48,120 –> 01:04:49,120
as relief.
1717
01:04:49,120 –> 01:04:50,400
Not because it’s right.
1718
01:04:50,400 –> 01:04:53,400
Because it ends the uncomfortable pause where someone has to think.
1719
01:04:53,400 –> 01:04:56,880
So psychological safety in an AI environment is not about protecting feelings.
1720
01:04:56,880 –> 01:04:58,560
It’s about protecting dissent.
1721
01:04:58,560 –> 01:05:02,680
It’s about creating permission structurally for people to say three things without career
1722
01:05:02,680 –> 01:05:03,680
damage.
1723
01:05:03,680 –> 01:05:04,960
This is a draft.
1724
01:05:04,960 –> 01:05:06,800
This is a hypothesis.
1725
01:05:06,800 –> 01:05:08,760
This needs escalation.
1726
01:05:08,760 –> 01:05:13,000
If those sentences don’t exist in your culture, Copilot will become a narrative engine
1727
01:05:13,000 –> 01:05:15,920
that nobody challenges until the consequences arrive.
1728
01:05:15,920 –> 01:05:17,520
And the consequences always arrive.
1729
01:05:17,520 –> 01:05:19,280
Here’s what most organizations get wrong.
1730
01:05:19,280 –> 01:05:21,400
They treat safety as a communications problem.
1731
01:05:21,400 –> 01:05:22,840
They run an awareness campaign.
1732
01:05:22,840 –> 01:05:24,920
They say it’s okay to experiment.
1733
01:05:24,920 –> 01:05:26,680
They tell people there are no stupid questions.
1734
01:05:26,680 –> 01:05:29,600
They publish a Viva Engage community with weekly tips.
1735
01:05:29,600 –> 01:05:30,600
They make posters.
1736
01:05:30,600 –> 01:05:31,600
They do prompt-a-thons.
1737
01:05:31,600 –> 01:05:33,280
They do all the visible things.
1738
01:05:33,280 –> 01:05:36,320
Then the first person gets burned for a mistake made with AI.
1739
01:05:36,320 –> 01:05:38,240
Maybe they send a draft with the wrong number.
1740
01:05:38,240 –> 01:05:40,280
Maybe they summarize the meeting incorrectly.
1741
01:05:40,280 –> 01:05:42,600
Maybe they use the wrong tone in a sensitive message.
1742
01:05:42,600 –> 01:05:44,680
Maybe they interpreted policy too confidently.
1743
01:05:44,680 –> 01:05:45,880
It doesn’t matter.
1744
01:05:45,880 –> 01:05:47,200
What matters is the outcome.
1745
01:05:47,200 –> 01:05:50,680
The organization punishes the mistake as if it was traditional negligence instead of
1746
01:05:50,680 –> 01:05:54,480
recognizing it as a predictable failure mode of probabilistic cognition.
1747
01:05:54,480 –> 01:05:56,440
And at that moment experimentation stops.
1748
01:05:56,440 –> 01:05:58,120
Not officially, quietly.
1749
01:05:58,120 –> 01:05:59,920
People don’t need a memo to learn what is safe.
1750
01:05:59,920 –> 01:06:03,160
They watch what happens to the first person who gets embarrassed in public.
1751
01:06:03,160 –> 01:06:05,480
The culture sets the rule in a single incident.
1752
01:06:05,480 –> 01:06:10,080
So readiness requires something more disciplined, a shared language for good AI usage that teams
1753
01:06:10,080 –> 01:06:12,080
can use to self-correct without shame.
1754
01:06:12,080 –> 01:06:13,840
Not use Copilot more.
1755
01:06:13,840 –> 01:06:15,320
Use these sentences.
1756
01:06:15,320 –> 01:06:17,040
My confidence is low.
1757
01:06:17,040 –> 01:06:18,480
Here are the sources I used.
1758
01:06:18,480 –> 01:06:20,000
Here’s what I didn’t verify.
1759
01:06:20,000 –> 01:06:21,520
Here’s the decision owner.
1760
01:06:21,520 –> 01:06:25,040
Those statements are how you keep cognition from becoming doctrine.
1761
01:06:25,040 –> 01:06:29,120
They also reduce the load on managers because managers can’t review everything.
1762
01:06:29,120 –> 01:06:32,720
They need signals that tell them when to trust, when to dig, and when to escalate.
1763
01:06:32,720 –> 01:06:36,200
Now connect this to governance because safety without guardrails becomes chaos.
1764
01:06:36,200 –> 01:06:39,280
You can’t tell people to experiment and then give them no boundaries.
1765
01:06:39,280 –> 01:06:41,800
That creates fear too because nobody knows what is allowed.
1766
01:06:41,800 –> 01:06:44,160
People either freeze or they route around controls.
1767
01:06:44,160 –> 01:06:46,720
So psychological safety has a minimum viable boundary.
1768
01:06:46,720 –> 01:06:47,880
Make judgment visible.
1769
01:06:47,880 –> 01:06:49,160
That’s the whole requirement.
1770
01:06:49,160 –> 01:06:53,040
If someone uses AI to produce an output that will influence a decision, they must attach
1771
01:06:53,040 –> 01:06:54,400
a judgment statement.
1772
01:06:54,400 –> 01:06:56,760
What it is, what it’s for, and who owns it.
1773
01:06:56,760 –> 01:06:58,240
If they can’t, they don’t ship it.
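As a sketch, that boundary is small enough to write down; the class and field names below are hypothetical, but they show that “make judgment visible” is four required fields and a refusal to ship without them:

```python
from dataclasses import dataclass

@dataclass
class JudgmentStatement:
    """The minimum viable boundary: judgment made visible before output ships."""
    confidence: str        # "low", "medium", "high"; stated, not implied
    sources: list[str]     # what the output actually rests on
    unverified: list[str]  # what was explicitly not checked
    decision_owner: str    # a named human, never a team

def ship(output: str, judgment: JudgmentStatement | None) -> str:
    # No judgment statement, no shipping; that is the whole requirement.
    if judgment is None or not judgment.decision_owner.strip():
        raise PermissionError("blocked: no judgment statement or no named owner")
    header = (f"[confidence: {judgment.confidence} | owner: {judgment.decision_owner}"
              f" | unverified: {', '.join(judgment.unverified) or 'none declared'}]")
    return header + "\n" + output
```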
1774
01:06:58,240 –> 01:06:59,920
This is not a training initiative.
1775
01:06:59,920 –> 01:07:01,560
It is behavioral design.
1776
01:07:01,560 –> 01:07:05,760
And it’s how you create a culture where people can collaborate with AI without surrendering
1777
01:07:05,760 –> 01:07:06,920
responsibility.
1778
01:07:06,920 –> 01:07:09,260
Because the goal isn’t to make everyone comfortable.
1779
01:07:09,260 –> 01:07:13,080
The goal is to make avoidance impossible and dissent safe enough to surface before the
1780
01:07:13,080 –> 01:07:15,760
incident review does it for you.
1781
01:07:15,760 –> 01:07:19,080
Minimal irreversible prescriptions that remove deniability.
1782
01:07:19,080 –> 01:07:21,600
So here’s where most episodes like this go off the rails.
1783
01:07:21,600 –> 01:07:25,080
They hear judgment and they immediately build a maturity model.
1784
01:07:25,080 –> 01:07:29,400
A roadmap, a center of excellence, a 12-week adoption program with badges and a SharePoint
1785
01:07:29,400 –> 01:07:32,040
hub and a congratulatory town hall.
1786
01:07:32,040 –> 01:07:35,040
That is how organizations turn a real problem into a safe ceremony.
1787
01:07:35,040 –> 01:07:36,480
This needs the opposite.
1788
01:07:36,480 –> 01:07:38,800
Minimal prescriptions that remove deniability.
1789
01:07:38,800 –> 01:07:41,320
Things you can’t agree with and then ignore.
1790
01:07:41,320 –> 01:07:44,120
Things that make avoidance visible immediately in normal work.
1791
01:07:44,120 –> 01:07:45,120
Three of them.
1792
01:07:45,120 –> 01:07:46,120
That’s it.
1793
01:07:46,120 –> 01:07:47,120
First, decision logs.
1794
01:07:47,120 –> 01:07:48,440
Not a repository, not a weekly project.
1795
01:07:48,440 –> 01:07:51,640
A single-page artifact that exists because memory is not governance.
1796
01:07:51,640 –> 01:07:55,280
Every meaningful decision influenced by AI gets a decision log.
1797
01:07:55,280 –> 01:07:56,280
One page.
1798
01:07:56,280 –> 01:07:57,280
Short.
1799
01:07:57,280 –> 01:07:58,280
Owned by a named human.
1800
01:07:58,280 –> 01:08:00,080
Stored where the action system can reference it.
1801
01:08:00,080 –> 01:08:03,000
It contains five fields and none of them are optional.
1802
01:08:03,000 –> 01:08:04,000
What is the decision?
1803
01:08:04,000 –> 01:08:05,000
Who owns it?
1804
01:08:05,000 –> 01:08:06,240
What options were considered?
1805
01:08:06,240 –> 01:08:07,240
What tradeoff was chosen?
1806
01:08:07,240 –> 01:08:08,600
What would cause a reversal?
1807
01:08:08,600 –> 01:08:10,800
That last one matters more than people admit.
1808
01:08:10,800 –> 01:08:11,960
Because it forces honesty.
1809
01:08:11,960 –> 01:08:15,560
It makes you state what evidence would change your mind, which is how you prevent post-hoc
1810
01:08:15,560 –> 01:08:16,720
rationalization.
1811
01:08:16,720 –> 01:08:20,840
The log explicitly says whether AI was used for cognition, summarization, drafting, option
1812
01:08:20,840 –> 01:08:22,800
generation. Not to shame anyone, but
1813
01:08:22,800 –> 01:08:27,000
to make the epistemology explicit, so later, when someone asks why we did this, the answer
1814
01:08:27,000 –> 01:08:29,440
isn’t “because the email sounded reasonable.”
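A minimal sketch of that artifact as a record type, assuming nothing beyond the five fields named above plus the AI-usage note; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class DecisionLog:
    """One page, owned by a named human, visible to the action system."""
    decision: str                  # what is the decision?
    owner: str                     # who owns it?
    options_considered: list[str]  # what options were considered?
    tradeoff_chosen: str           # what trade-off was chosen?
    reversal_trigger: str          # what would cause a reversal?
    ai_used_for: list[str]         # e.g. ["summarization", "option generation"]

    def validate(self) -> None:
        # None of the fields are optional; an empty field is a missing decision.
        missing = [name for name, value in vars(self).items() if not value]
        if missing:
            raise ValueError(f"decision log incomplete: {missing}")
```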
1815
01:08:29,440 –> 01:08:32,040
Second, judgment moments embedded in workflow.
1816
01:08:32,040 –> 01:08:36,200
This is the part that most organizations avoid because it feels like slowing down.
1817
01:08:36,200 –> 01:08:38,400
It does slow down on purpose.
1818
01:08:38,400 –> 01:08:42,240
A judgment moment is a forced checkpoint where a human must classify intent before the
1819
01:08:42,240 –> 01:08:48,960
system allows action to proceed. Not “reviewed,” not “looks good”: classification. Severity,
1820
01:08:48,960 –> 01:08:53,400
risk posture, exception approved or denied, escalation required or not, customer impact
1821
01:08:53,400 –> 01:08:57,440
statement accepted or rejected, change blast radius accepted or rejected.
1822
01:08:57,440 –> 01:08:59,360
If you can’t classify, you can’t proceed.
1823
01:08:59,360 –> 01:09:02,400
This is what turns thinking into accountable movement.
1824
01:09:02,400 –> 01:09:06,440
It’s also what stops Copilot from becoming a narrative engine that quietly drives decisions
1825
01:09:06,440 –> 01:09:07,440
through momentum.
1826
01:09:07,440 –> 01:09:09,000
And no, this is not bureaucracy.
1827
01:09:09,000 –> 01:09:10,560
This is entropy management.
1828
01:09:10,560 –> 01:09:14,320
Because the organization will always drift toward the easiest path.
1829
01:09:14,320 –> 01:09:19,240
Fast outputs, unclear ownership and plausible deniability when it fails.
1830
01:09:19,240 –> 01:09:21,680
Judgment moments are how you make that drift expensive.
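Here is what a judgment moment could look like as a single enforcement function; a sketch only. The field names are assumptions, and in practice the check would sit in the workflow engine rather than in application code:

```python
def judgment_moment(classification: dict) -> dict:
    """Forced checkpoint: called before the action system may proceed."""
    required = ["severity", "risk_posture", "escalation_required",
                "owner", "rationale"]
    missing = [f for f in required
               if classification.get(f) in (None, "")]
    if missing:
        # If you can't classify, you can't proceed.
        raise PermissionError(f"action blocked; unclassified fields: {missing}")
    return classification  # the action system proceeds, with an audit record
```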
1831
01:09:21,680 –> 01:09:25,640
Third, name decision owners. Individuals, not teams.
1832
01:09:25,640 –> 01:09:30,320
The phrase “the business decided” is a compliance evasion tactic. So is “IT approved it.” So is
1833
01:09:30,320 –> 01:09:36,480
“HR said.” Teams execute, committees advise, tools propose; only an individual can be held
1834
01:09:36,480 –> 01:09:37,480
accountable.
1835
01:09:37,480 –> 01:09:41,840
So every judgment moment needs a person attached. Their role can be delegated; their decision
1836
01:09:41,840 –> 01:09:43,120
can be informed by others.
1837
01:09:43,120 –> 01:09:46,920
But their name is the anchor that stops responsibility from diffusing into the org chart.
1838
01:09:46,920 –> 01:09:49,520
Now if you do those three things, here’s what changes.
1839
01:09:49,520 –> 01:09:53,760
AI stops being a magic answer machine and becomes what it actually is: a cognitive surface
1840
01:09:53,760 –> 01:09:58,200
that generates options. And your organization becomes what it has to be: a place where choices
1841
01:09:58,200 –> 01:10:00,480
are owned, recorded and enforced.
1842
01:10:00,480 –> 01:10:04,880
This is also where the M365-to-ServiceNow handoff stops being a vague integration story
1843
01:10:04,880 –> 01:10:06,760
and becomes a governance mechanism.
1844
01:10:06,760 –> 01:10:09,800
Copilot produces synthesis and proposed interpretations.
1845
01:10:09,800 –> 01:10:14,240
Fine, but the only thing that can move an organization is a decision that triggers enforced
1846
01:10:14,240 –> 01:10:15,240
consequences.
1847
01:10:15,240 –> 01:10:19,120
That means Copilot output must land in a workflow that contains a judgment moment,
1848
01:10:19,120 –> 01:10:21,160
a named owner and a decision log reference.
1849
01:10:21,160 –> 01:10:23,800
For example, incident triage.
1850
01:10:23,800 –> 01:10:26,640
Copilot proposes three plausible interpretations.
1851
01:10:26,640 –> 01:10:31,200
Credential theft, misconfigured Conditional Access, or benign automation gone noisy.
1852
01:10:31,200 –> 01:10:32,720
It provides evidence links.
1853
01:10:32,720 –> 01:10:33,720
Great.
1854
01:10:33,720 –> 01:10:35,680
Then the human selects intent.
1855
01:10:35,680 –> 01:10:37,960
Credential theft. And selects severity.
1856
01:10:37,960 –> 01:10:38,960
High.
1857
01:10:38,960 –> 01:10:42,720
That selection writes the decision owner into the record and forces a rationale field.
1858
01:10:42,720 –> 01:10:46,200
Now the action system enforces containment steps and approvals.
1859
01:10:46,200 –> 01:10:47,520
It creates the audit trail.
1860
01:10:47,520 –> 01:10:49,760
It schedules the post-incident review.
1861
01:10:49,760 –> 01:10:53,400
It blocks the organization from improvising itself into inconsistency.
1862
01:10:53,400 –> 01:10:54,640
That’s the handoff.
1863
01:10:54,640 –> 01:10:56,280
Cognition proposes possibilities.
1864
01:10:56,280 –> 01:10:57,760
Judgment selects intent.
1865
01:10:57,760 –> 01:10:59,600
Action enforces consequence.
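A sketch of that triad as three functions, purely illustrative: the interpretations and step names come from the example above, the function names are invented, and the real handoff would be the M365-to-ServiceNow workflow, not a script:

```python
def cognition_propose(incident_summary: str) -> list[dict]:
    # Stand-in for Copilot: interpretations plus evidence links, nothing more.
    return [
        {"intent": "credential theft", "evidence": ["sign-in logs"]},
        {"intent": "misconfigured Conditional Access", "evidence": ["policy diff"]},
        {"intent": "benign automation gone noisy", "evidence": ["automation history"]},
    ]

def judgment_select(options: list[dict], owner: str, intent: str,
                    severity: str, rationale: str) -> dict:
    # The human selection writes the owner into the record and forces a rationale.
    if not rationale.strip():
        raise PermissionError("selection blocked: rationale field is mandatory")
    chosen = next(o for o in options if o["intent"] == intent)  # fails on unknown intent
    return {**chosen, "owner": owner, "severity": severity, "rationale": rationale}

def action_enforce(decision: dict) -> list[str]:
    # Stand-in for the action system: enforced steps, audit trail, scheduled review.
    return [
        f"run containment steps (severity={decision['severity']})",
        f"write audit trail (owner={decision['owner']})",
        "schedule post-incident review",
    ]

steps = action_enforce(judgment_select(
    cognition_propose("impossible-travel sign-ins on a service account"),
    owner="j.doe", intent="credential theft", severity="high",
    rationale="MFA anomaly corroborated by sign-in logs",
))
```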
1866
01:10:59,600 –> 01:11:01,200
And notice what disappears.
1867
01:11:01,200 –> 01:11:03,160
The ability to pretend.
1868
01:11:03,160 –> 01:11:06,320
You can’t say the system said so because the system didn’t decide.
1869
01:11:06,320 –> 01:11:07,640
A human did.
1870
01:11:07,640 –> 01:11:10,600
You can’t say nobody approved this because the owner is recorded.
1871
01:11:10,600 –> 01:11:14,440
You can’t say we didn’t know because the options and evidence are attached.
1872
01:11:14,440 –> 01:11:16,760
This is why the prescriptions must be irreversible.
1873
01:11:16,760 –> 01:11:18,240
They don’t teach people to be better.
1874
01:11:18,240 –> 01:11:22,520
They force the enterprise to behave like it believes its own risk posture matters.
1875
01:11:22,520 –> 01:11:25,840
You want the simplest test for whether your AI program is real?
1876
01:11:25,840 –> 01:11:28,880
Pick any AI-influenced decision from last week and ask:
1877
01:11:28,880 –> 01:11:30,120
Where is the judgment moment?
1878
01:11:30,120 –> 01:11:31,120
Who owned it?
1879
01:11:31,120 –> 01:11:32,120
And where is the log?
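If the decision record is machine-readable at all, that test is a trivial lookup. The field names below (judgment_moment_ref, decision_log_ref) are invented for illustration:

```python
def sixty_second_test(record: dict) -> list[str]:
    """Return which of the three questions cannot be answered from the record."""
    questions = {
        "judgment moment": record.get("judgment_moment_ref"),
        "owner": record.get("owner"),
        "decision log": record.get("decision_log_ref"),
    }
    return [q for q, answer in questions.items() if not answer]
```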
1880
01:11:32,120 –> 01:11:35,520
If you can’t answer in under 60 seconds, you didn’t scale capability.
1881
01:11:35,520 –> 01:11:37,320
You scaled plausible deniability.
1882
01:11:37,320 –> 01:11:38,320
The future of work.
1883
01:11:38,320 –> 01:11:40,240
From execution to evaluation.
1884
01:11:40,240 –> 01:11:44,800
So once you accept the triad, cognition, judgment, action, you also accept the future of work
1885
01:11:44,800 –> 01:11:46,680
you are trying to avoid.
1886
01:11:46,680 –> 01:11:48,000
Execution becomes cheap.
1887
01:11:48,000 –> 01:11:49,760
Evaluation becomes expensive.
1888
01:11:49,760 –> 01:11:50,920
That isn’t a slogan.
1889
01:11:50,920 –> 01:11:52,920
It’s the new cost model of knowledge work.
1890
01:11:52,920 –> 01:11:55,880
Most organizations build their identity around execution.
1891
01:11:55,880 –> 01:11:59,160
Shipping documents, shipping decks, shipping tickets, shipping updates.
1892
01:11:59,160 –> 01:12:03,480
They rewarded the person who could produce the artifact fastest because artifacts were expensive.
1893
01:12:03,480 –> 01:12:05,480
AI collapses that scarcity.
1894
01:12:05,480 –> 01:12:07,320
The artifact is no longer the proof of work.
1895
01:12:07,320 –> 01:12:09,440
It’s the receipt that work might have happened.
1896
01:12:09,440 –> 01:12:12,640
And that distinction matters because enterprises don’t run on receipts.
1897
01:12:12,640 –> 01:12:13,720
They run on consequences.
1898
01:12:13,720 –> 01:12:15,160
So the future looks like this.
1899
01:12:15,160 –> 01:12:17,880
Meetings stop being places where people create content in real time.
1900
01:12:17,880 –> 01:12:18,880
AI will do that.
1901
01:12:18,880 –> 01:12:23,160
Notes, summaries, action items, option lists, draft narratives, done.
1902
01:12:23,160 –> 01:12:26,880
The meeting becomes the place where someone has to choose what matters, what changes,
1903
01:12:26,880 –> 01:12:28,280
and what gets enforced.
1904
01:12:28,280 –> 01:12:30,800
Which means the real meeting isn’t the calendar invite.
1905
01:12:30,800 –> 01:12:34,920
The real meeting is the judgment moment embedded in the workflow afterward.
1906
01:12:34,920 –> 01:12:38,520
That’s where the organization either becomes accountable or becomes theatrical.
1907
01:12:38,520 –> 01:12:39,520
Documents change too.
1908
01:12:39,520 –> 01:12:41,440
A document is no longer a deliverable.
1909
01:12:41,440 –> 01:12:43,480
It’s a dialogue between cognition and judgment.
1910
01:12:43,480 –> 01:12:45,600
AI produces drafts.
1911
01:12:45,600 –> 01:12:48,080
Humans state intent, constraints, and ownership.
1912
01:12:48,080 –> 01:12:51,720
The value shifts from “did we write it?” to “can we defend it?”
1913
01:12:51,720 –> 01:12:53,080
So the writing isn’t the work.
1914
01:12:53,080 –> 01:12:54,080
The reasoning is.
1915
01:12:54,080 –> 01:12:56,560
And the most uncomfortable shift is for leadership.
1916
01:12:56,560 –> 01:13:01,520
Leadership has spent years outsourcing reasoning to process, committees, and status reporting.
1917
01:13:01,520 –> 01:13:03,160
AI makes that outsourcing easier.
1918
01:13:03,160 –> 01:13:04,520
That’s why this episode exists.
1919
01:13:04,520 –> 01:13:07,560
In the AI era, leaders don’t get judged on throughput.
1920
01:13:07,560 –> 01:13:09,160
Throughput is free.
1921
01:13:09,160 –> 01:13:11,240
They get judged on decision quality.
1922
01:13:11,240 –> 01:13:13,000
Decision quality shows up in three ways.
1923
01:13:13,000 –> 01:13:16,320
Fewer reversals, fewer surprises, and fewer unowned outcomes.
1924
01:13:16,320 –> 01:13:20,800
It shows up when incident response has a record, when policy exceptions have a rationale,
1925
01:13:20,800 –> 01:13:25,320
when changes have an owner, when forecasts trigger actual constraints instead of more slides.
1926
01:13:25,320 –> 01:13:27,320
And if that sounds like bureaucracy, good.
1927
01:13:27,320 –> 01:13:29,200
That’s the part you were already living with.
1928
01:13:29,200 –> 01:13:33,520
The difference is that now bureaucracy is optional and judgment is mandatory.
1929
01:13:33,520 –> 01:13:37,040
Because if you don’t embed judgment into the system, you don’t get fast innovation
1930
01:13:37,040 –> 01:13:38,040
here.
1931
01:13:38,040 –> 01:13:39,040
You get fast entropy.
1932
01:13:39,040 –> 01:13:42,160
You get people shipping answer-shaped text into the organization without any enforcement
1933
01:13:42,160 –> 01:13:46,200
pathway attached and then acting surprised when it becomes doctrine by repetition.
1934
01:13:46,200 –> 01:13:49,480
This is also where agents become dangerous in a very specific way.
1935
01:13:49,480 –> 01:13:50,480
Agents will act.
1936
01:13:50,480 –> 01:13:51,480
They will call APIs.
1937
01:13:51,480 –> 01:13:52,480
They will execute tasks.
1938
01:13:52,480 –> 01:13:53,480
They will move data.
1939
01:13:53,480 –> 01:13:54,480
They will send messages.
1940
01:13:54,480 –> 01:13:55,480
They will close tickets.
1941
01:13:55,480 –> 01:14:00,240
They will do all the things your organization currently does slowly and inconsistently.
1942
01:14:00,240 –> 01:14:03,120
But an agent cannot own the decision that justified the action.
1943
01:14:03,120 –> 01:14:07,400
So if you scale agents without scaling judgment moments, you don’t create autonomy.
1944
01:14:07,400 –> 01:14:09,360
You create automation without accountability.
1945
01:14:09,360 –> 01:14:12,200
You create conditional chaos with a nicer user interface.
1946
01:14:12,200 –> 01:14:16,840
Over time, the organization drifts into a probabilistic operating model.
1947
01:14:16,840 –> 01:14:17,840
Outputs happen.
1948
01:14:17,840 –> 01:14:19,640
Actions happen and nobody can explain why.
1949
01:14:19,640 –> 01:14:20,640
That is not modern work.
1950
01:14:20,640 –> 01:14:22,040
That is unmanaged delegation.
1951
01:14:22,040 –> 01:14:26,400
So the only sustainable pattern is this: cognition proposes, judgment selects, action
1952
01:14:26,400 –> 01:14:27,400
enforces.
1953
01:14:27,400 –> 01:14:29,080
And the trick is not building more intelligence.
1954
01:14:29,080 –> 01:14:31,440
The trick is building operational gravity.
1955
01:14:31,440 –> 01:14:35,400
Constraints that force a human to declare intent before the system is allowed to move.
1956
01:14:35,400 –> 01:14:40,640
That is what separates an enterprise that scales trust from an enterprise that scales confusion.
1957
01:14:40,640 –> 01:14:43,680
Because AI doesn’t eliminate work, it relocates it.
1958
01:14:43,680 –> 01:14:48,160
It moves the effort from “make the thing” to “decide the thing”.
1959
01:14:48,160 –> 01:14:51,680
From “write the answer” to “own the consequences.”
1960
01:14:51,680 –> 01:14:54,560
And from “produce the artifact” to “defend the rationale.”
1961
01:14:54,560 –> 01:14:58,180
And if your organization refuses that shift, it will keep buying intelligence and keep
1962
01:14:58,180 –> 01:15:02,080
suffering the same failures just faster and with better formatting.
1963
01:15:02,080 –> 01:15:03,080
Conclusion
1964
01:15:03,080 –> 01:15:04,080
The behavior shift
1965
01:15:04,080 –> 01:15:06,240
Reintroduce judgment into the system.
1966
01:15:06,240 –> 01:15:08,000
AI scales whatever you are.
1967
01:15:08,000 –> 01:15:10,480
Clarity or confusion, because it can’t own your decisions.
1968
01:15:10,480 –> 01:15:13,640
If you do one thing after this, stop asking “what does the AI say?”
1969
01:15:13,640 –> 01:15:15,640
and start asking “who owns this decision?”
1970
01:15:15,640 –> 01:15:16,640
Subscribe.
1971
01:15:16,640 –> 01:15:22,080
Go to the next episode on building judgment moments into the M365-ServiceNow operating model.