
The human factor: What AI means for leadership with Cassie Kozyrkov, Google’s first Chief Decision Scientist

“The way your life turns out depends on luck and the quality of your decisions. You can only control one, which is what makes decisions so important.” 

This is the rallying cry of Cassie Kozyrkov, who advocates for the importance of decision-making both in our personal lives and the world of technology. As Google’s first Chief Decision Scientist, Kozyrkov trained over 20,000 Googlers in AI, data science, and decision intelligence — a discipline she defines as “turning information into better action at any scale, in any setting.”

Now a CEO, highly sought-after AI advisor, and author whose writing has reached millions of readers, she joined the Celonis Face Value series to clear up how we talk about AI and explore how leaders and enterprises can embrace it to unlock value. “AI is a tool for decision-making,” Kozyrkov says. “It’s also a product of decisions.”

Getting on the same page: What’s the hype really about?

We’re bombarded with talk about AI, but Kozyrkov argues we’re not always talking about the same thing. She often observes people having “long, beautiful conversations with one another,” only to discover they’re on parallel tracks. “So I’d love to break this down a little bit and level-set before we roll back into what this means for leadership and for organizations,” she says. There are three groups with different perspectives and priorities around AI, and looking at each of them helps us understand today’s AI revolution:

  1. AI researchers: Theorists and mathematicians who invent general-purpose algorithms for others to use. Before AI was put into practice, they dominated the scene.  

  2. AI engineers: Folks who apply AI algorithms to enterprise problems at scale. Emerging in the 2010s, you’d find them at “the Netflixes, Googles, etc. of the world” because AI applications, in those days, were neither easy nor cheap. 

  3. AI users: A new, much larger group of individuals on the rise since late 2022, who wield AI tools to solve individual problems (ever used ChatGPT? You’re one of them).

“What’s this third group about?” Kozyrkov asks. “What would be the key word for what everybody is excited about?” The answer is interface. For the first time ever, we’re giving AI as AI to users — hundreds of millions of them. They can specify what they want AI systems to output for them in their native language, rather than an unnatural one like mathematics or programming. “Language is how we make sense of the world around us,” she explains, “and language is what the generative AI (GenAI) revolution is all about.”

Why leaders must now become “authors of meaning”

While traditional AI is for automating tasks where there’s one right answer (like identifying a cat in a photo), GenAI is for automating tasks where there are endless right answers. “We are allowing ourselves to speak with machines and do it with this greater range of possibility and greater range of volatility,” says Kozyrkov. Feed an AI assistant a few bullet points, for example, and ask it to craft a polite email. It can get the job done in infinite ways.

Amid endless viable solutions, a new challenge arises for leadership: articulating the tangible value that GenAI can deliver to the enterprise. For individual users, it’s often enough that these tools feel useful. But at scale, impact demands to be measured.

To effectively gauge the ROI of GenAI systems, leaders need to step up and become “authors of meaning.” They must define what value means for their organization and champion a new mindset around performance measurement. “The new paradigm is to say, ‘Hey, this is up to you as a leader,’” she says. “You have to set parameters and you have to translate infinity into a metric that your organization can deal with…this is going to raise the bar for you.”

It’s never the genie that’s dangerous, it’s the unskilled wisher

Kozyrkov emphasizes that decision-making becomes a critical skill for leaders managing AI systems.

“The thing with these systems that we call autonomous,” she says, “[is that] they're not autonomous.” The effects may be complex and lingering, but what AI produces is always a result of human decision-making. AI is a tool; it can’t take responsibility for itself.

“What I want to see less of,” Kozyrkov adds, “is people thinking that machines are intelligent on their behalf…That word — ‘intelligent’ — is such a red herring.”

AI systems may perform well a majority of the time, but they still make mistakes, even when producing fluent outputs. When we forget that AI is a machine — and that it may not have the right context, or that we may not have made ourselves understood — we can easily fall into the trap of uncritically trusting these tools. The result is thoughtless decision-making, which can have devastating consequences at scale. It’s therefore paramount for leaders to “inject thoughtfulness” into their interactions with AI systems, which increasingly resemble magic lamps. “It’s never the genie that’s dangerous,” Kozyrkov says. “It’s the unskilled wisher.” She urges leaders to:

  • Know what to wish for and how to articulate it with maximum precision

  • Be able to check and score whether what they receive on the other end is what they wished for (a toy sketch of this follows below)
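As a toy illustration of that second point, here is a minimal sketch of what “checking and scoring” a wish might look like in practice. The example, criteria, and function names are ours, not Kozyrkov’s; the point is simply that fuzzy quality gets translated into explicit, countable checks.

```python
# A toy sketch (our own example, not from the interview) of turning a wish
# into something scoreable: spell out the criteria, then score each output.

def score_email(draft: str) -> float:
    """Score a generated email against the criteria we actually wished for."""
    checks = [
        len(draft.split()) <= 150,                 # concise
        draft.lower().startswith(("hi", "dear")),  # polite greeting
        "thank" in draft.lower(),                  # courteous tone
    ]
    return sum(checks) / len(checks)

drafts = ["Dear Ms. Lee, thank you for the update...", "yo, send the file"]
print([score_email(d) for d in drafts])  # e.g. [1.0, 0.33]: measurable, not just vibes
```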

The true definition of AI-first

If our goal is to build a car that will take us to a desired destination, collecting piles upon piles of shiny new wheels, engines, and tires is an irrational way to get there. Similarly, businesses that implement AI for AI’s sake or to “keep up with the Joneses” misunderstand what it means to be AI-first. They also struggle to show ROI for their AI initiatives.

The first step in AI is knowing when it’s the right tool for the job, Kozyrkov explains. If we know how to automate a task, it should be automated the traditional way, with explicit step-by-step instructions (in other words, code). Only when a problem is ineffable — so complex that we can’t wrap our minds around it — should we use AI.
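To make that rule of thumb concrete, here is a minimal sketch (the example, names, and pseudo-calls are ours, not Kozyrkov’s): when the rules of a task can be written down, plain code does the job; AI earns its place only when the rules can’t be articulated.

```python
# A minimal illustration of the rule of thumb above (our example, not Kozyrkov's).

def is_overdue(invoice: dict) -> bool:
    """A task we fully understand: encode the rules as ordinary code."""
    return invoice["days_outstanding"] > 30 and not invoice["paid"]

# By contrast, "is there a cat in this photo?" or "turn these bullets into a
# polite email" has no rule set we could ever write down. Those are the tasks
# to hand to a trained model (hypothetical interfaces, for illustration only):
#
#   label = vision_model.predict(photo)         # learned from examples
#   email = llm.generate(prompt=bullet_points)  # endless right answers
```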

She asks us to imagine a “massive attic filled with yesterday’s impossible problems and abandoned ideas.” An AI-first approach is one where we “stop thinking that attic is locked forever.” Instead, we open the door, consider which processes would be most valuable to the business if we automated them today, and reach higher — ideally with a good decision-maker leading the charge.

Nico Wada
Writer

Nico Wada is a writer at Celonis. She has worked at leading agencies and fast-growing B2B companies across sectors like fintech and alternative data. When not writing, you’ll find her running around Central Park, at a reggae concert, or planning her next trip to Japan.
