MIT Sloan Management Review

Calibrate AI Use to the Decision at Hand
AI can help leaders make better decisions, but only when they stop treating every problem like the same AI use case.
Authors: Pedro Amorim, Amr Saleh, and Ulrika Cederskog Sundling
On a rainy Tuesday in London, the leadership team of a consumer goods company reviewed two business decisions: “Where should we open our next five stores?” and “Should we pivot the brand toward wellness?” Generative AI had been used to support the decision-making process for addressing both questions. The team ended up with plenty of plausible qualitative arguments for the proposed road map for store expansion — without data or analytics to support these recommendations. The tool had helped the team produce a polished narrative on the wellness pivot, along with a compelling deck advocating the strategic move, but stakeholder engagement was shallow, and there wasn’t a shared conviction that the organization was ready to move.
The meeting exposed the flawed assumption that all AI is the same and that every type of artificial intelligence supports decision-making equally. In reality, different decisions require fundamentally different AI roles. Some decisions are narrow: Objectives are clear, data is available, and outcomes can be measured quickly. Others are wide: Goals are contested, information is incomplete, and alignment matters as much as analysis. When leaders treat both decision types as the same, they predictably misapply AI technology, using generative tools where analytical engines are needed or where the real work is deliberation and commitment. The result is disappointing output for narrow decisions, and fragile buy-in and difficult execution for wide decisions that demand socialization and alignment.
That mismatch is showing up across industries. AI adoption is now widespread, yet many organizations still struggle to convert AI activity into measurable business impact. In its 2025 report on the state of AI, McKinsey describes this gap starkly: 88% of companies now use AI in at least one function, but only around 40% are able to see a positive impact on the bottom line. In our work with executive teams, the pattern behind that gap is consistent. The pressure to use AI — amplified by headlines about generative and agentic systems — often outruns the harder discipline of deciding where AI should lead, where it should support, and what kind of AI fits the decision at hand. As a result, teams build impressive decks for problems that require more time for internal alignment, and they use conversational generative tools for decisions that demand rigorous analytics.
The solution is to calibrate AI’s role in the decision. By distinguishing between narrow and wide decisions, organizations can match the technology’s capabilities to decision characteristics — allowing AI to act as a decision engine when the decision is narrow and as a decision helper when the decision is wide.
Narrow or Wide?
An AI capability can play very different roles, depending on the decision it’s supporting. Analytical AI and generative AI can each create value across many contexts, but their effectiveness depends on matching them to the characteristics of the decision. Narrow decisions, such as where to open the next store, have clear objectives, usable input data, and fast feedback loops. Wide decisions, like evaluating a brand repositioning toward wellness, are multicriteria, messy, and politically charged.
Importantly, this is not a binary classification. Many leadership decisions are portfolios: A wide, strategic choice often contains narrow subdecisions that can be modeled and optimized. A brand pivot (wide) may include narrow components, such as message testing, media-mix optimization, pricing experiments, and demand forecasting. Each type of question is typically better suited to a particular AI approach.
Narrow decisions are the familiar territory of analytical AI. They are well-defined problems where objectives can be specified, the space of possibilities can be modeled, and performance metrics are clear. Forecasting demand, detecting fraud, optimizing delivery routes, and choosing store locations are classic examples. In these domains, analytical AI — optimization, prediction, and causal modeling — can evaluate patterns and trade-offs at a scale and speed that humans cannot match.
In making narrow decisions, AI serves as a precise and tireless decision engine while decision makers focus on setting the objective correctly, providing high-quality inputs, stress-testing assumptions, and defining guardrails (such as constraints, thresholds, and exception rules). Managers must supervise how the system learns, how it performs under changing conditions, and what happens when reality diverges from the model.
Recent research shows how generative AI can complement analytical AI in narrow contexts: as an accelerator around the analytical core. It can help clarify problem framing, document data logic, and translate technical outputs into business language. Generative AI also helps to capture tacit operational knowledge that is often not documented in manuals or data sets. Through dialogue, examples, and iteration, it can make the hidden layer of human expertise easier to extract and reuse — while the analytical model remains responsible for the decision logic itself.
Wide decisions are characterized by ambiguity. They typically involve competing priorities, evolving information about risk and success factors, and the need for alignment. Objectives are rarely singular; they often combine financial, strategic, ethical, and political considerations. Entering a new market, repositioning a brand, redesigning an organizational structure, or navigating regulatory uncertainty are common examples.
In wide decisions, AI’s role must be carefully calibrated. Generative AI can help leaders synthesize diverse inputs, surface assumptions, frame scenarios, and articulate trade-offs in a way that makes the decision space more legible. Agentic AI can extend that support when it is designed as a goal-directed workflow that uses tools (search, retrieval, and analysis) to gather and organize material — with explicit checkpoints, traceability, and human review.
Managers using generative AI as described above must take care not to mistake fluency for understanding: A system can produce persuasive narratives while missing context, embedding hidden assumptions, or overweighting unreliable sources. Leaders should therefore treat AI as an amplifier of perspective rather than an authority. The best teams design their process so that AI helps people reason together — by broadening the evidence base and clarifying trade-offs — and avoid outsourcing judgment or commitment.
Getting Practical About Approach
Every function contains a mix of decisions that can be modeled and are measurable and decisions that are ambiguous and alignment-dependent. Leaders should be able to distinguish a narrow decision from a wide one by applying a small set of diagnostic criteria. The goal isn’t to establish a perfect classification; it’s to set a useful calibration: How much of this decision can be formalized, measured, and iterated, and how much depends on judgment, values, and organizational commitment?
A practical rule of thumb is to treat the diagnostic as a scorecard. Consider the questions beside each of the diagnostic criteria listed below. If most of your answers are “yes,” the decision likely sits closer to narrow, and analytical AI can serve as an engine generating specific recommendations. If “no” dominates, the decision is closer to wide, and AI is better suited for seeking evidence, surfacing assumptions, and informing collective judgment.
- Objective clarity: Is the goal crisp and quantifiable (not just directionally appealing)?
- Data readiness: Do we have relevant, reliable, reusable data — not just anecdotes?
- Causal stability: Will historical relationships likely hold over the decision horizon?
- Boundary transparency: Are the boundaries of the problem codifiable, or mostly contextual/political?
- Feedback loop: Can we observe outcomes quickly and incorporate them into the next decision cycle?
- Reversibility: Can we reverse or iterate on this decision cheaply, or is it a one-way street?
Most important, managers should use those diagnostic questions to spot hybrid decisions. Many wide, strategic decisions contain narrow components that can be informed by analytical AI. When the overall decision is identified as wide, they should ask, “Which subdecisions inside it score narrow?” Those are candidates for analytical models, optimization, experimentation, and automated workflows.
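The scorecard logic can be sketched in a few lines of code. This is only an illustration: the criterion names mirror the list above, and the simple majority-of-yes threshold is an assumption on our part, not a prescribed formula.

```python
# Minimal sketch of the narrow-vs-wide diagnostic scorecard.
# The six criteria mirror the diagnostic list; the "majority of yes
# answers" threshold is an illustrative assumption, not a fixed rule.

CRITERIA = [
    "objective_clarity",
    "data_readiness",
    "causal_stability",
    "boundary_transparency",
    "feedback_loop",
    "reversibility",
]

def classify_decision(answers: dict) -> str:
    """Return 'narrow' if most diagnostic answers are yes, else 'wide'."""
    yes_count = sum(bool(answers[c]) for c in CRITERIA)
    return "narrow" if yes_count > len(CRITERIA) / 2 else "wide"

# Example: a store-location choice scores yes on most criteria.
store_location = {
    "objective_clarity": True,
    "data_readiness": True,
    "causal_stability": True,
    "boundary_transparency": True,
    "feedback_loop": True,
    "reversibility": False,
}
print(classify_decision(store_location))  # narrow
```

Running the same function over the subdecisions of a wide choice is one way to surface the narrow components mentioned above.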
Matching the Tool to the Objective
Two recent cases we’ve worked on illustrate what changes when organizations calibrate AI’s role to the decision.
Churn prevention in retail banking. Choosing which customers to target in a retention campaign is a narrow decision: The objective is measurable, the data is typically abundant, and feedback loops are relatively fast. A European retail bank wanted to reduce attrition among high-value customers and approached the problem with analytical AI. A predictive model estimated each customer’s likelihood of churning within 90 days and triggered next-best actions calibrated to risk and customer value. The system was built to do what narrow decisions demand: convert structured signals into recommendations that can be monitored, tested, and improved.
Generative AI complemented the analytical core without displacing it. It translated unstructured customer signals — such as contact-center interactions and complaint narratives — into structured inputs that enriched the analytical model, and it drafted outreach scripts tailored to customer segments and likely pain points. It also summarized the drivers behind each recommendation so that front-line agents could act quickly and with context.
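The pairing of a risk score with calibrated next-best actions can be sketched as follows. Everything here is hypothetical: the scoring function stands in for a trained 90-day churn model, and the thresholds and action names are invented for illustration, not drawn from the bank’s actual system.

```python
# Illustrative sketch of pairing a churn-risk score with next-best actions,
# in the spirit of the retail-bank case. The weights, thresholds, and action
# names are hypothetical; a real system would use a trained predictive model.

def churn_risk(months_inactive: int, complaints: int, balance_trend: float) -> float:
    """Toy risk score in [0, 1]; stands in for a trained 90-day churn model."""
    score = 0.1 * months_inactive + 0.15 * complaints - 0.2 * balance_trend
    return min(max(score, 0.0), 1.0)

def next_best_action(risk: float, customer_value: float) -> str:
    """Calibrate the retention action to both churn risk and customer value."""
    if risk >= 0.7 and customer_value >= 0.5:
        return "personal_outreach_by_relationship_manager"
    if risk >= 0.7:
        return "targeted_retention_offer"
    if risk >= 0.4:
        return "proactive_service_check_in"
    return "no_action"

# A high-value customer with declining balances and recent complaints.
risk = churn_risk(months_inactive=4, complaints=2, balance_trend=-0.5)
print(next_best_action(risk, customer_value=0.8))
# personal_outreach_by_relationship_manager
```

The point of the structure, not the numbers, is what matters: the narrow engine converts signals into monitorable, testable recommendations, which is exactly what the case describes.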
Organizational redesign at a global insurer. Organizational redesign sits at the other end of the spectrum: It’s a wide decision due to a combination of ambiguity, competing priorities, and political constraints. A multinational insurer was considering shifting from a product-centric structure to a customer segment structure. The decision would cascade into reporting lines, incentives, technology investments, and cultural identity. It involved deciding which trade-offs the organization was willing to make and then building a commitment to live with them.
In this case, generative tools synthesized internal context — the strategic storyline leaders had been telling, how the organization described past reorganizations, and what had actually broken or worked — alongside external evidence, including relevant industry cases and organizational design and change management research. The value came from making it easier to reason about the choice: AI helped the team articulate a small set of coherent scenarios, anticipate second-order effects on stakeholders, and surface tensions between the stated strategy and the company’s current resource-allocation patterns.
Six Steps for AI-Supported Decision-Making
In our work with executive teams, we consistently see the same pattern. Once leaders distinguish narrow decisions from wide ones, they stop talking about AI at the conceptual level and start explicitly deciding how AI will be used in the decision process — and what will change as a result. Teams move faster on problems that are measurable, and they stop expecting automation to substitute for commitment where the decision is inherently political, multicriteria, or irreversible.
Leaders who want to see AI more broadly applied as a decision-support tool in their organizations can take the following steps:
1. Inventory critical decisions. List the top 20 decisions (of the organization or function) and classify each as narrow or wide using the six-question diagnostic.
2. Treat the framework as a portfolio. Fund a small number of narrow bets that can show measurable impact quickly, and stop “automation” pilots that are actually wide decisions in disguise.
3. Stand up two playbooks. For narrow decisions, outline the analytics life cycle (data, modeling, monitoring, and exception handling). For wide decisions, establish a deliberation protocol (evidence standards, explicit assumptions, and verification checkpoints).
4. Wire GenAI differently. Use it as an accelerator around narrow models (such as documentation and feature extraction) and as a synthesis partner for wide decisions (such as scenario building and pre-mortems) — with traceability and review.
5. Instrument the decision. For narrow decisions, track accuracy, drift, and operational impact. For wide decisions, log prompts, sources, rationales, and decision checkpoints so that reasoning is inspectable and repeatable.
6. Close the loop. Run post-decision reviews: Did the narrow engine hit targets and remain stable? Did the wide process surface key risks, real options, and the trade-offs that were at stake?
The message can stay simple: AI is not a monolithic technology but one that comes in different flavors with different capabilities. Gaining measurable benefits from it requires that leaders use the appropriate tool for the job at hand.