
Talking to processes like people: How Process Intelligence enables AI to optimize business operations

Artificial Intelligence (AI) has the potential to revolutionize enterprise operations, but its effectiveness depends on the quality of the data and context it receives. Without a deep understanding of business processes, AI struggles to grasp the unique way an organization runs. Celonis bridges this gap by providing AI with the critical input it needs — real-time, contextualized process intelligence — through its Process Intelligence Graph and AI-driven solutions like AgentC.

In the following interview, Manuel Haug, Field CTO at Celonis, speaks to Lucky Kuffer from D1g1tal Agenda about how Process Intelligence enhances AI’s ability to optimize enterprise processes. They discuss Celonis' integration of Large Language Models (LLMs), the role of the Process Intelligence Graph, and the future of Object-Centric Process Mining. Read on to explore how AI and process intelligence are transforming business operations.

Talking to processes like talking to people

d1g1talLOOPs: Mr. Haug, one statement from the Opening Keynote of Celosphere 2024 particularly stuck with me: “No AI without PI”, that is, no artificial intelligence without process intelligence. What does that mean?

Manuel Haug: In this context, “AI” refers to enterprise AI. This statement is the result of the experiences we’ve gathered over the past two years concerning the AI wave and Large Language Models (LLMs). Many of our customers have tried to improve enterprise processes using AI on their own but didn’t get far—to put it bluntly, they failed.

At the same time, through our intensive collaboration with customers, we have gained a deep understanding of what makes a process good or bad. Context is a critical factor in determining whether AI can support fact-based decision making at all. That’s why we formulated this statement. Looking at AI in practice confirms how accurate it is.

Speaking of practice: The new product features introduced in this keynote are based on the new Process Intelligence Graph, correct?

Yes. We often say that with the PI Graph, we have created a new common language – for IT and business – that allows us to articulate how a process works.

The fascinating thing about these language models is that they operate using natural language. And that is quite remarkable—orchestrating and operating a system simply by describing it in natural language. The logical consequence is that by expressing a process and everything it involves in a language that all stakeholders understand, we can meaningfully leverage AI to assist us. For example, this applies to processes in accounting or production. The right wording enables us to efficiently apply AI-driven solutions.

Is this PI Graph static, only changing occasionally, or does it constantly evolve?

It is both. It has a “core framework.” Invoices in finance, for instance, consist of fundamental building blocks that do not change. What happens with them, however, can be very dynamic.

Take BMW as an example. The company’s production lines have changed significantly due to electromobility. Of course, production orders are still issued. But the process associated with a production order for electric vehicles has naturally evolved compared to that for combustion-engine vehicles. The integration of new suppliers also leads to changes. I would say the objects themselves are relatively static, but what happens to them—the so-called events that allow us to reconstruct the process—is highly dynamic.

The data itself is not static. Thanks to live system integration, we are always up to date and have real-time access to the current state of processes. However, the entire PI Graph is freely modelable, even though the core objects remain the same. Naturally, the PI Graph differs in detail for each customer. That’s no surprise—after all, from an external perspective, the production lines of BMW and Pfizer have little in common.
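To make the object/event split concrete, here is a minimal, purely illustrative sketch in Python. It is not the Celonis PI Graph schema; the class and field names are invented to show the idea that object types stay relatively stable while the events attached to them change as the process evolves.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    activity: str          # e.g. "Create Production Order", "Install Battery Pack"
    timestamp: datetime

@dataclass
class ProcessObject:
    object_type: str       # the relatively static "core", e.g. "ProductionOrder"
    object_id: str
    events: list[Event] = field(default_factory=list)   # the dynamic part

    def add_event(self, activity: str) -> None:
        self.events.append(Event(activity, datetime.now()))

# The same object type serves combustion and electric vehicle orders;
# only the recorded events differ between the two process variants.
order = ProcessObject("ProductionOrder", "PO-4711")
order.add_event("Create Production Order")
order.add_event("Install Battery Pack")   # an EV-specific step
print([e.activity for e in order.events])
```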

I assume the new feature AgentC in the Celonis platform is also built on this technology?

That’s absolutely right. AgentC is part of our AI portfolio, which is based on the PI Graph. However, AgentC is also characterized by its strong integration into our evolving ecosystem.

Communities already exist, such as the one around Microsoft and its Copilot Studio development environment. With AgentC, we integrate such third-party applications into our Celonis platform.

What are the results of this?

The result is that we can enable different AI applications to communicate with Celonis and retrieve process-related information. Through an API, we can access process contexts, such as determining whether an express delivery is necessary or if standard delivery suffices.
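As a rough sketch of the kind of lookup described here, an AI application could ask the process layer for context before deciding on a delivery mode. The endpoint, payload and field names below are invented for illustration; they are not the actual Celonis API.

```python
import requests

def needs_express_delivery(order_id: str, base_url: str, token: str) -> bool:
    # Hypothetical endpoint returning process context for a sales order.
    response = requests.get(
        f"{base_url}/process-context/orders/{order_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    context = response.json()

    # Decide based on process context rather than the order record alone:
    # how late the order is predicted to be and what the customer was promised.
    days_late = context.get("predicted_delay_days", 0)
    promised_express = context.get("promised_delivery") == "express"
    return promised_express or days_late > 2

# Example call (placeholder values):
# if needs_express_delivery("SO-1001", "https://example.invalid/api", "TOKEN"):
#     print("Route order SO-1001 to express shipping")
```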

So can we expect an ecosystem of developers to create AI apps based on AgentC APIs?

That is already happening, as we demonstrated at Celosphere 2024. Our own Copilot follows a modular principle that allows the creation of such expert systems—experts for procurement, truck loading, and so on.

These experts are granted access to specific subprocesses and can interact with them. This kind of implementation is already possible today with Celonis technology and our technology partners.

A question about the language models used: Can I choose freely which LLM I use? Do I have the choice between GPT and open-source models like Llama?

Theoretically, yes, but practically, I wouldn’t recommend it at the moment. In reality, the only model that is currently industrially applicable is ChatGPT—and that is not necessarily due to model quality or performance, but because of tool support and infrastructure maturity, such as Azure OpenAI. This could change quickly, of course. Celonis is flexible in this regard.

While all usable models are language models, they operate differently. A prompt optimized for a ChatGPT model won’t work equally well with Llama. This also applies to internal reasoning. Different LLMs have distinct strengths for different use cases.

For the future, this means it is conceivable to implement multiple interacting copilots—one expert may use ChatGPT, another may use Llama, and yet another may rely on Anthropic/Claude or another LLM.
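A minimal sketch of that multi-copilot idea: each “expert” is bound to the model that suits its use case. The model clients below are placeholders, not real provider SDK calls; in practice they would wrap whichever LLM backend is actually used.

```python
from typing import Callable

def gpt_client(prompt: str) -> str:       # placeholder for a GPT-backed call
    return f"[gpt] {prompt}"

def llama_client(prompt: str) -> str:     # placeholder for a Llama-backed call
    return f"[llama] {prompt}"

EXPERTS: dict[str, Callable[[str], str]] = {
    "procurement": gpt_client,
    "truck_loading": llama_client,
}

def ask_expert(domain: str, question: str) -> str:
    # Prompts would also be tailored per model, since a prompt tuned for one
    # LLM rarely transfers unchanged to another.
    return EXPERTS[domain](question)

print(ask_expert("procurement", "Which open orders are blocking production?"))
```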

But the actual prompting happens outside of Celonis?

That depends. For the copilots configured in Celonis, Celonis also handles the prompting. A copilot is essentially just a chat interface. The “expert” is created by accessing the right data, which is defined through configuration—specifying which part of the process it should access.

Additionally, the role of the system prompt is defined: How should it behave? Which audience should it interact with? We can precisely configure such an optimized expert system for each specific use case.
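Conceptually, such an expert boils down to a scoped data selection plus a system prompt, as in the illustrative sketch below. The field names and values are invented; this is not the actual Celonis configuration format.

```python
from dataclasses import dataclass

@dataclass
class ExpertConfig:
    name: str
    process_scope: str      # which part of the process the expert may access
    system_prompt: str      # how it should behave and for which audience
    model: str              # which LLM backs this expert

accounts_payable_expert = ExpertConfig(
    name="AP Copilot",
    process_scope="accounts_payable.invoices",
    system_prompt=(
        "You assist accounts payable clerks. Answer concisely, cite the "
        "invoice and purchase order IDs you used, and never guess amounts."
    ),
    model="gpt-4o",
)
```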

Does the PI Graph serve as a knowledge base for so-called Retrieval-Augmented Generation (RAG)?

Yes, that is one technique for accessing information. But there are various interaction possibilities. Another interaction could be setting trigger points to define when the copilot should start.

We define a signal in Celonis that we can calculate. When this alert is triggered, we pass the use case to the AI system for further processing.
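In simplified terms, the trigger mechanism looks something like the sketch below: a signal is evaluated over process data, and only when it fires is the case handed to the AI system. The signal rule and handler are invented examples, not Celonis functionality.

```python
def blocked_invoice_signal(case: dict) -> bool:
    # Hypothetical rule: an invoice blocked for more than 5 days raises an alert.
    return case.get("status") == "blocked" and case.get("days_blocked", 0) > 5

def hand_over_to_copilot(case: dict) -> None:
    # In a real setup this would invoke the configured copilot/agent;
    # here we only print the hand-over for illustration.
    print(f"Passing case {case['id']} to the copilot for resolution")

cases = [
    {"id": "INV-001", "status": "blocked", "days_blocked": 7},
    {"id": "INV-002", "status": "open", "days_blocked": 0},
]

for case in cases:
    if blocked_invoice_signal(case):
        hand_over_to_copilot(case)
```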

One must not forget that these systems operate probabilistically—they behave slightly differently even when executing identical prompts multiple times. That’s why it is crucial to integrate these outputs into monitoring and treat their statements as if they were coming from human teams. This allows adjustments as needed, including involving additional agents in the “discussion.”

Indeed, we see this approach being used by our customers in practice when optimizing processes. In simple terms, there is now an additional—albeit virtual—team working on a subprocess to achieve a specific outcome, such as increased customer satisfaction.
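One simple way to handle that probabilistic behavior, sketched below under the assumption of a generic LLM call, is to sample the model several times and only act when the answers agree; anything without a stable consensus is escalated. The query function is a placeholder for whatever model is actually configured.

```python
import collections

def query_model(prompt: str) -> str:
    # Placeholder: in practice this would call the configured LLM.
    return "approve"

def consensus_answer(prompt: str, runs: int = 5, threshold: float = 0.8) -> str | None:
    answers = [query_model(prompt) for _ in range(runs)]
    answer, count = collections.Counter(answers).most_common(1)[0]
    if count / runs >= threshold:
        return answer
    return None   # no stable consensus: escalate to a human or another agent

decision = consensus_answer("Should invoice INV-001 be released for payment?")
print(decision or "Escalate for review")
```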

A very popular topic at Celosphere was Object-Centric Process Mining – the demand for sessions on this new data modeling approach was enormous. How long will it take before this type of process mining becomes standard?

Ah, that’s a good question. At the very least, we can already see that Object-Centric Process Mining has become a must-have for all our customers. Anyone starting with process mining today is using it.

However, we have a large customer base, so it will take time before it is fully rolled out across all process mining activities. We now consider Object-Centric Process Mining the de facto standard because there’s no reason to use an earlier version of the technology.

Editor’s note: This interview was originally published on D1g1tal Agenda and was republished with permission. It has been translated from the original German.

Bill Detwiler
Senior Communications Strategist and Editor Celonis Blog

Bill Detwiler is Senior Communications Strategist and Editor of the Celonis blog. He is the former Editor in Chief of TechRepublic, where he hosted the Dynamic Developer podcast and Cracking Open, CNET’s popular online show. Bill is an award-winning journalist, who’s covered the tech industry for more than two decades. Prior to his career in the software industry and tech media, he was an IT professional in the social research and energy industries.
