# How to Set 3 AI Goals for Your Next Fiscal Year

**By Dan Cumberland** · Published May 11, 2026 · Categories: AI Strategy

## Why Most AI Initiatives Never Show Up in the Numbers

Almost every company now uses AI in some form, but only a minority can show it in the P&L, and most pilots die quietly somewhere between the demo and production\.  The winning firms did something specific: they picked a few outcome goals, gave each one an owner and a number, and redesigned the work around them\.

The data is sobering.  McKinsey's *State of AI in 2025*[1](/blog/blog-next-gen-architecture#ref-1) found that 88% of organizations now use AI in at least one business function, up from 78% the year before.  But only about 39% report any measurable effect on enterprise EBIT (earnings before interest and taxes), and most of those say AI accounts for less than 5% of it.  Adoption is nearly universal.  Bottom-line impact is rare.

And the pilots?  They mostly stall\.

- An MIT NANDA report[4](/blog/blog-next-gen-architecture#ref-4) found roughly 95% of enterprise generative AI pilots delivered no measurable impact on profit and loss, with only about 5% reaching rapid revenue acceleration\.
- Gartner predicted at least 30% of generative AI projects would be abandoned after proof of concept by the end of 2025[2](/blog/blog-next-gen-architecture#ref-2), citing poor data quality, weak risk controls, escalating costs, and unclear business value\.
- Gartner also predicts more than 40% of agentic AI projects will be canceled by the end of 2027[3](/blog/blog-next-gen-architecture#ref-3), for much the same reasons\.

A quick note on those numbers: they're different studies measuring different things\.  Gartner's figure is *projects abandoned after proof of concept*\.  MIT's is *pilots that produced no measurable P&L return*\.  Don't let anyone \(including yourself, in the planning meeting\) flatten them into a single "most AI projects fail" stat\.  The precise framing is the point\.

You've probably lived the pattern\.  You ran a pilot, it demoed well, nothing moved, and now it's planning season again\.  If that's the failure mode, the fix starts with how you write the goal in the first place\.

## Outcome Goals vs\. Activity Goals: The Reframe

An AI goal is only useful if it names a business outcome and a number, not an activity\.  "Deploy AI across five teams" is an activity goal\.  "Cut proposal turnaround from nine days to four using AI drafting" is an outcome goal\.  Outcome goals are the ones that show up in the P&L; activity goals are how pilots quietly stall\.

The distinction matters because of *what an outcome goal forces you to do*\.  McKinsey's data[1](/blog/blog-next-gen-architecture#ref-1) shows fundamental workflow redesign has the highest correlation with bottom\-line impact, yet only about 21% of organizations using generative AI have redesigned any workflows at all\.  An activity goal \("roll out the tool"\) never forces that redesign\.  An outcome goal \("cut the turnaround in half"\) does, because you can't hit the number without changing the work\.  Practitioner OKR guidance lands in the same place: measure the result, not the usage[6](/blog/blog-next-gen-architecture#ref-6)\.

| Activity goal (measures effort) | Outcome goal (measures result) |
| --- | --- |
| "Deploy AI across five teams" | "Cut proposal turnaround from nine days to four using AI drafting, owned by Ops, reviewed monthly" |
| "Run an AI pilot in finance" | "Reduce month-end close from ten days to six, owned by the Controller, reviewed quarterly" |
| "Give clients an AI assistant" | "Cut average RFI response time from three days to one, owned by the project delivery lead, reviewed monthly" |

Here's the reframe: you don't need an AI strategy with twenty initiatives\.  You need three AI goals you'll actually finish\.  Keep the distinction clean: *goals* are committed, owned, measured, and in the operating plan; *experiments* are cheap, time\-boxed, and carry no headcount\.  Run as many experiments as you want\.  Commit to about three goals\.

Format doesn't matter much\.  OKRs work\.  SMART goals work\.  What matters is an outcome metric, a named owner, and a recurring check\-in\.  And keep the goals tool\-agnostic: ChatGPT, Claude, Gemini, whatever your team already uses\.  This is where [an AI strategy engagement](/services/ai-strategy) earns its keep: the heart of the work is exactly this triage, turning a wishlist into three goals with shape\.

With that distinction in hand, here's the first of the three goals: the one most firms can move fastest\.

## Goal 1 — The Efficiency Goal: Take Time or Cost Out of a Core Workflow

Your first AI goal should remove measurable time or cost from one workflow you run constantly, the kind of work that eats senior people's hours without differentiating you\.  Pick the workflow, measure a baseline, target a specific reduction, and put the leader of that function on the hook for it\.

For a professional\-services or AEC firm, the candidates are usually obvious once you look:

- **Proposal and RFP turnaround time:** first\-draft assembly, boilerplate, tailoring to the client
- **Spec and drawing review or QA cycles:** flagging gaps, inconsistencies, missing items
- **Project documentation and meeting\-note capture:** minutes, requests for information \(RFIs\) logged, status summaries
- **Knowledge capture from senior staff:** so a junior isn't blocked waiting on the one person who knows

Now make it a real goal\.  The rewrite looks like this:

> ❌ "Deploy AI in five teams."
>
> ✅ "Cut proposal turnaround from nine days to four using AI drafting, owned by the Director of Operations, reviewed monthly."

That's the single most useful move in this whole article\.  The first version is a press release\.  The second is something the board can track and the owner can be held to\.

Why does the efficiency goal usually move fastest?  Because it points at work you control end to end, and because the evidence says that's where the return is\.  McKinsey's data[1](/blog/blog-next-gen-architecture#ref-1) ties bottom\-line impact to *redesigning* the workflow, not bolting AI onto it\.  The MIT report[4](/blog/blog-next-gen-architecture#ref-4) found the biggest ROI showed up in back\-office automation \(cutting external agency and outsourcing costs\), even though more than half of generative AI budgets go to sales and marketing tools\.  Most firms are aiming the money at the wrong target\.

One caution, which sets up a trap later: redesign the workflow, don't layer AI on a broken one\.  If your proposal process is nine days because three people sign off in sequence and two of them are always traveling, a drafting tool won't save you\.  Fix the process, then add the AI\.  If you're unsure which workflow to pick, [a simple framework for deciding which AI investments are worth making](/blog/ai-decision-framework-founders) helps you rank the candidates before you commit\.

Efficiency buys you room\.  The second goal is about using that room to win or serve better\.

## Goal 2 — The Growth\-or\-Quality Goal: Win More Work, or Serve Clients Better

Your second AI goal should move a number on the growth side of the business: more proposals won, faster client response, or capacity to take on more projects without adding headcount\.  It's still an outcome goal with a baseline and an owner\.  The difference is it points at revenue or client experience instead of cost\.  For most firms that side of the business is exactly where AI hasn't shown up yet[1](/blog/blog-next-gen-architecture#ref-1), which is the gap a growth goal is meant to close\.

What this looks like for an AEC or professional\-services firm:

- **Win rate on proposals and RFPs:** faster, sharper, more tailored submissions
- **Client response time on RFIs and change orders:** the responsiveness clients actually remember
- **Capacity per project manager:** more projects per PM, no new hires, using AI for documentation and status reporting
- **Consistency of client communications across the firm:** every PM sounding like the firm, not like themselves on a bad day

Make it measurable the same way the efficiency goal was: baseline, target, named owner, review cadence\.  "Take on 20% more projects this year without adding project managers, using AI for documentation and status reporting" is a growth goal, owned by the operations partner, reviewed quarterly\.  "Get better at AI" is not\.  Neither is "use AI in business development," which is an activity goal in a nicer suit\.

One more, on the client\-experience side: "Cut average RFI response time from three days to one, every project, every PM," owned by the project delivery lead, reviewed monthly\.  That's the goal clients actually feel, and it's the one that shows up in your next round of references\.

Be honest with the board about timing\.  Growth goals can take a quarter or two longer to show movement than efficiency goals: there's a sales cycle, a client cycle, a ramp\.  Say so up front so nobody expects a Q1 miracle\.  This is also the goal that most directly answers the founder's real question \(*why are we spending money on AI?*\) because it points at the top line, not just the cost line\.

Two goals now point at this year's numbers\.  The third makes the first two repeatable\.

## Goal 3 — The Foundation Goal: Data, Literacy, or Governance

Your third AI goal should build the thing that makes goals 1 and 2 repeatable next year: cleaner data, AI literacy across the team, or lightweight governance\.  This goal is measured by leading indicators \(adoption, training completion, data readiness\), not by a P&L line in year one\.  This is the groundwork you do before the climb: not glamorous, and the thing that decides whether next year's goals are even reachable\.  That's the point, not an apology\.

What a foundation goal can be:

- **Data readiness:** clean, accessible project and client data for the workflows in Goals 1 and 2
- **AI literacy:** a baseline level of capability across the team, ongoing, not a one\-off lunch\-and\-learn
- **Lightweight governance:** a short usage policy, an approved\-tools list, a review cadence, not a 40\-page document nobody reads

Why does this earn its own slot instead of being folded into the other two?  The naive version of goal\-setting says every AI goal must be an ROI line\.  But the foundation work has no clean year\-one number, and it determines whether next year's goals are even possible\.  Gartner's list of common failure modes[5](/blog/blog-next-gen-architecture#ref-5) names it directly: no clear business case, poor data readiness, weak governance, treating AI as a tech project, not redesigning workflows\.  Two of those five are foundation problems\.  Skip the foundation goal and you're choosing one of them\.

There's also a quieter reason\.  The MIT report[4](/blog/blog-next-gen-architecture#ref-4) noted that most workers already use personal AI tools while only a minority of firms have official subscriptions: "shadow AI," people running ungoverned experiments on company work because nobody gave them a sanctioned path\.  It's almost certainly happening in your firm right now\.  A foundation goal turns that into a managed capability instead of a liability\.  And [lightweight AI governance](/blog/ai-governance-strategy), not the enterprise version, is enough to do it\.

Both are true here: you can be ambitious about this year's numbers *and* honest that some of the work is groundwork with no immediate payoff\.  All of it matters\.

Three goals on a page aren't a plan yet\.  Here's what turns them into operating\-plan commitments\.

## Make Each Goal Real: Baseline, Owner, Budget, Cadence

Each of the three AI goals needs five things before it goes in the operating plan: a baseline you measured before you started, a target with a date, one named executive owner, a budget line, and a review cadence\.  Without those, you don't have a goal; you have a pilot waiting to stall\.

Every AI goal needs the same five things:

1. **A baseline:** the number today, measured *before* you start, not estimated afterward
2. **A target:** the number you're aiming for, and by when
3. **A named executive owner:** a person, not "IT" or "the AI committee"
4. **A budget line:** what it costs to do, sitting in the plan like any other line item
5. **A review cadence:** a recurring meeting where the owner reports movement against the baseline

The owner point is the one firms get wrong\.  McKinsey found AI high performers are about three times more likely than their peers to have senior leaders visibly own and commit to AI initiatives[1](/blog/blog-next-gen-architecture#ref-1)\.  "Visibly own" means a name, not a department\.  Assigning a named executive to each goal costs nothing and is about as low\-regret a move as exists in this whole exercise\.  And it's the right one to make: no matter the question, people are the answer\.  Pick the function leader who already owns the workflow or the metric\.  Founder\-led firms often [bring in a fractional AI officer](/blog/what-is-a-fractional-ai-officer) to drive this in the first planning cycle, then hand it off\.

On budget: there's no fixed rule, and anyone who quotes you a magic percentage is guessing\.  Budget each of the three goals individually \(tooling, the time to redesign the workflow, training\) the way you'd budget any other initiative\.  Then decide how ambitious you want to be\.

Pull it together and one goal statement looks like this:

> **Goal 1 \(Efficiency\):** Cut proposal turnaround from 9 business days to 4 by Q3, using AI drafting on the first pass\.  Baseline: 9\.2 days, last quarter\.  Owner: Director of Operations\.  Budget: tooling plus six weeks of process redesign\.  Review: monthly ops meeting\.
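If you track the three goals in a script or a shared tracker rather than a slide, the five fields map onto a simple record.  Here's a minimal sketch in Python; the field names and the `progress` calculation are illustrative conventions, not a prescribed format:

```python
from dataclasses import dataclass


@dataclass
class AIGoal:
    """One committed AI goal: the five fields from the checklist above."""
    name: str
    baseline: float      # the number today, measured before starting (e.g. 9.2 days)
    target: float        # the number you're aiming for (e.g. 4 days)
    due: str             # target date, e.g. "Q3"
    owner: str           # a named executive, not a department
    budget_note: str     # what it costs, sitting in the plan like any line item
    cadence: str         # recurring review, e.g. "monthly ops meeting"

    def progress(self, current: float) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        gap = self.baseline - self.target
        if gap == 0:
            return 1.0
        return (self.baseline - current) / gap


# The Goal 1 example above, encoded as a record:
goal1 = AIGoal(
    name="Cut proposal turnaround using AI drafting on the first pass",
    baseline=9.2,
    target=4.0,
    due="Q3",
    owner="Director of Operations",
    budget_note="tooling plus six weeks of process redesign",
    cadence="monthly ops meeting",
)

# At each monthly review, the owner reports the current number against the baseline:
print(f"{goal1.progress(7.0):.0%} of the gap closed")  # prints "42% of the gap closed"
```

The point of the structure isn't the code; it's that a goal missing any of the five fields literally can't be constructed, which is a decent test to run on each goal statement before it goes in the operating plan.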

Do that three times and you have an AI plan\.  For the cadence to mean anything, the owner has to report against the baseline every time, which is its own discipline\.  [How to measure AI success](/blog/measuring-ai-success) gets into what those numbers should actually be\.

Even with the structure right, a few predictable traps can sink the plan\.  Watch for these in the planning conversation\.

## Traps to Avoid in Fiscal\-Year AI Planning

The most common ways AI goals fail in fiscal planning are predictable: setting a dozen "goals" instead of three, layering AI on a broken workflow, pouring budget into sales\-and\-marketing tools while the back office goes untouched, and treating the whole thing as an IT project\.  Each one has a fix\.

1. **Too many goals\.** A twelve\-item AI wishlist becomes pilot purgatory: the state where an AI experiment demos well, never scales to production, and never moves the P&L\.  *Fix:* cap it at three committed goals\.  Everything else is a cheap experiment with no headcount attached\.

1. **Layering AI on broken workflows\.** McKinsey's data[1](/blog/blog-next-gen-architecture#ref-1) is blunt about this: only about 21% of generative AI users have redesigned any workflows, and redesign is what correlates with bottom\-line impact\.  Just because adding a tool is easy doesn't mean it's good\.  *Fix:* fix the workflow first, then add the AI\.

1. **Budget aimed at the shiny tools\.** The MIT report[4](/blog/blog-next-gen-architecture#ref-4) found more than half of generative AI budgets go to sales and marketing tools, while the biggest ROI showed up in back\-office automation\.  *Fix:* weight the efficiency goal toward the unglamorous internal workflow, not the demo\-friendly customer\-facing one\.

1. **Treating it as a tech project\.** Gartner's list of failure modes[5](/blog/blog-next-gen-architecture#ref-5) \(no clear business case, poor data readiness, weak governance, treating AI as a technology project, not redesigning workflows\) is mostly a list of business failures wearing a tech costume\.  *Fix:* a business owner, a business metric, a business review\.  IT supports it; IT doesn't own it\.

The budget trap is also where the meter runs quietly: [the hidden costs of AI projects](/blog/hidden-costs-ai-projects) usually aren't the license fee\.

If choosing and sequencing these three goals feels like more than your leadership team can do alongside running the firm, that's a normal place to get help\.

## Getting the Three Goals Right

Picking the right three AI goals, and writing them so they survive a budget conversation, is exactly the work an AI strategy engagement does with a leadership team during a planning cycle\.  The output isn't a slide deck\.  It's three goals with baselines, owners, and a review cadence you can take straight into the operating plan\.

At [Dan Cumberland Labs](https://dancumberlandlabs.com) that engagement runs as a series of audit conversations: deep\-dive sessions to surface the candidates, then a hit list of opportunities ranked by value, then implementation plans for the top ones\.  You own the plan at the end: build it in\-house, or hand it to anyone you like\.  The point isn't to hand you a fish; it's to leave you able to fish\.

You don't need twenty initiatives\.  You need three goals, and the thinking that picks the right three\.

A few questions that come up every planning season:

## FAQ

### How many AI goals should a company set per year?

Three is enough: one efficiency goal, one growth\-or\-quality goal, and one foundation goal\.  Past that, they stop being commitments and start being experiments competing for the same attention, which is how most of them never get finished\.  Run as many cheap experiments as you like; commit to about three goals\.

### What's the difference between an AI outcome goal and an AI activity goal?

An activity goal measures effort: "deploy AI to five teams\."  An outcome goal measures result: "cut proposal turnaround from nine days to four using AI drafting\."  Outcome goals are the ones that show up in the P&L, because hitting the number forces you to change the work, not just install a tool[6](/blog/blog-next-gen-architecture#ref-6)\.

### Why do most AI pilots fail?

They lack an owner, a metric, and a workflow redesign\.  McKinsey found only about 21% of generative AI users had redesigned any workflows[1](/blog/blog-next-gen-architecture#ref-1), and an MIT report found roughly 95% of enterprise pilots delivered no measurable P&L return[4](/blog/blog-next-gen-architecture#ref-4)\.  The fix is unglamorous: a baseline, a named executive owner, a target, and a recurring review\.

### How much of next year's budget should go to AI?

There's no fixed rule, and a single magic percentage isn't the right way to think about it\.  Budget each of the three goals individually \(tooling, the time to redesign the workflow, training\) the way you'd budget any other initiative, then decide how ambitious you want to be\.  The discipline is in the line items, not a top\-down number\.

### Who should own AI goals?

A named executive per goal, not "IT" in the abstract\.  McKinsey found AI high performers are about three times more likely to have senior leaders visibly own AI initiatives[1](/blog/blog-next-gen-architecture#ref-1)\.  Pick the function leader who already owns the workflow or the metric; founder\-led firms often bring in a fractional AI leader to drive this in the first planning cycle, then hand it off\.

Three goals\.  Owners\.  A baseline\.  A review date\.  Write them down this week\.

## References

1. McKinsey & Company \(QuantumBlack\), "The State of AI in 2025: Agents, innovation, and transformation" \(2025\) — [https://www\.mckinsey\.com/capabilities/quantumblack/our\-insights/the\-state\-of\-ai](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai)
2. Gartner, Inc\., "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025" \(2024\) — [https://www\.gartner\.com/en/newsroom/press\-releases/2024\-07\-29\-gartner\-predicts\-30\-percent\-of\-generative\-ai\-projects\-will\-be\-abandoned\-after\-proof\-of\-concept\-by\-end\-of\-2025](https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025)
3. Gartner, Inc\., "Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027" \(2025\) — [https://www\.gartner\.com/en/newsroom/press\-releases/2025\-06\-25\-gartner\-predicts\-over\-40\-percent\-of\-agentic\-ai\-projects\-will\-be\-canceled\-by\-end\-of\-2027](https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027)
4. Fortune \(reporting on the MIT NANDA initiative, "The GenAI Divide: State of AI in Business 2025"\), "MIT report: 95% of generative AI pilots at companies are failing" \(2025\) — [https://fortune\.com/2025/08/18/mit\-report\-95\-percent\-generative\-ai\-pilots\-at\-companies\-failing\-cfo/](https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/)
5. Gartner, Inc\., "Why Half of GenAI Projects Fail: Avoid These 5 Common Mistakes" \(2025\) — [https://www\.gartner\.com/en/articles/genai\-project\-failure](https://www.gartner.com/en/articles/genai-project-failure)
6. Worxmate, "OKRs for AI\-Driven Teams: 2026 Guide to Measuring AI Impact" \(2026\) — [https://worxmate\.ai/resources/articles/okrs\-for\-ai\-driven\-teams/](https://worxmate.ai/resources/articles/okrs-for-ai-driven-teams/)


---

Source: https://dancumberlandlabs.com/blog/next-gen-architecture/
