Scaling Decisions: How AI Can Complement Your Team's Judgment

Roger Mitchell

A few days ago, when I asked whether your systems are working for you or vice versa, I shared an easy trick: ask the following question:

“Does this step truly require a person’s judgment, or are we just using them to connect data?”

I firmly believe that optimizing for this question is the first step that most organizations need to take to ensure their systems are working for their people.

However, as your organization scales, your people can end up thinly spread across many initiatives, which raises the question: how do we solve that problem?

With the rise of generative AI and its increasing accessibility, we can complement and supplement a person's judgment within your business processes.

Here’s how it works:

  1. Dive deeper into how a judgment call is made with your people
  2. Create prompts that capture how the judgment call is made along with any supporting data, then test them with AI
  3. Operationalize it with your automation platform to record the prompt and response from AI

Let’s unpack each of these steps to dig into what is required to make this possible.

Dive deeper into how a judgment call is made
Our intent here is to learn more about the decision-making process your people currently follow when they make a judgment call. Some of the questions you'll want to ask are:

  • What do you look at when you’re making a decision?
  • Do you need to reach out to anyone to gain additional context?
  • If you’re split on what to do, how do you land on a single outcome?

After you have answers to these questions, walk through historical or live examples to surface any other pieces of information that influence the process but were missed.

Your goal here is not perfection, but a solid foundation on which to start the next step.

Create prompts and test with AI
Once there’s sufficient understanding of how a person makes a judgment call, we can compose prompts that provide context and data, then test those with your generative AI vendor of choice.

Our goal is to ensure that the prompt has sufficient context about the:

  • Role of AI in this situation
  • Process that AI should take to make a decision
  • Data available to support the decision-making process

Optionally, we can include instructions about how to format its response, which is especially helpful if the intent is to perform a comparative analysis between the judgment calls made by your people versus what AI decided.
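The three prompt ingredients above can be assembled programmatically so they stay consistent across tests. Here is a minimal sketch; the `build_prompt` helper, the section labels, and the discount-approval policy in the usage example are all hypothetical illustrations, not part of any real system.

```python
# Hypothetical helper that assembles a judgment-call prompt from the three
# ingredients named above (role, process, data) plus an optional format spec.

def build_prompt(role: str, process: str, data: dict, response_format: str) -> str:
    """Compose a prompt with the AI's role, decision process, and supporting data."""
    data_lines = "\n".join(f"- {key}: {value}" for key, value in data.items())
    return (
        f"Role: {role}\n\n"
        f"Decision process:\n{process}\n\n"
        f"Supporting data:\n{data_lines}\n\n"
        f"Response format: {response_format}"
    )

# Illustrative judgment call: reviewing a sales discount request.
prompt = build_prompt(
    role="You review discount requests for the sales team.",
    process="Approve discounts up to 15% when the deal exceeds $50k; otherwise escalate.",
    data={"deal_size": "$72,000", "requested_discount": "12%"},
    response_format="Reply with APPROVE or ESCALATE, then one sentence of reasoning.",
)
print(prompt)
```

Keeping the prompt assembly in one place makes it easy to iterate on wording during testing and to reuse the exact same structure once the process is operationalized.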

Operationalize it with your automation platform
Once you feel confident in your prompts, it's time to integrate them into the business process alongside where your people are making and recording their decisions.

Between native features and third-party automation platforms (e.g. Zapier, Make, Power Automate), you can likely run the prompt-and-response loop with generative AI at roughly the same time as a person's judgment call.

Plus, these platforms also make it easy to persist the prompt and response so you can analyze the effectiveness of the prompt, AI vendor and its model, and compare its response to the judgment call made by your people.
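Persisting each prompt and response next to the human decision makes the comparison straightforward. A minimal sketch of that analysis step follows; the field names and the JSON Lines log file are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical records: each pairs the prompt and AI response with the
# judgment call a person actually made for the same case.
import json

records = [
    {"prompt": "discount request #1", "ai_decision": "APPROVE", "human_decision": "APPROVE"},
    {"prompt": "discount request #2", "ai_decision": "ESCALATE", "human_decision": "APPROVE"},
]

# Persist for later analysis; one JSON object per line is easy to append to.
with open("judgment_log.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Compare AI responses to the judgment calls your people made.
matches = sum(r["ai_decision"] == r["human_decision"] for r in records)
agreement = matches / len(records)
print(f"Agreement: {agreement:.0%}")  # prints "Agreement: 50%"
```

An agreement rate like this, tracked per prompt version and per model, is one simple way to decide when AI output is reliable enough to free your people from a given step.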

TLDR: You can start to use generative AI to complement the judgment calls made by your people at various points in your business process, then compare those outcomes if your intent is to free up your people from certain steps.
