5 Comments
Dr Sam Illingworth

Yes John! I really wish more people were writing about how to own and scale the signal from their first forays into AI use, rather than trying to retrofit it once the horse has bolted. 🙏

John Wernfeldt

Yeah, “retrofit once the horse has bolted” is exactly the pattern. The frustrating part is that the signal from the first foray is usually right there, if someone owned the question of what they were actually trying to learn from it.

Michael Argento

Completely agree. The pattern keeps repeating: governance tries to catch up after systems are already in motion.

The real unlock is making high-stakes outputs cryptographically accountable at creation time, so the signal is preserved before it can drift.

John Wernfeldt

Yes, totally agree.

Michael Argento

“AI doesn’t fix weak governance. It amplifies it.” Agreed.

I’d add: the next step is making high-stakes outputs cryptographically accountable at creation time, not just governed after the fact.