The Difference Between a Steering Committee and a Task Force (And Why It Matters for AI)


The Wrong Body Making the Wrong Call

Most AI initiatives at AEC firms stall because the wrong body is making the wrong kind of decision. Architectural decisions — what we'll commit to, what the operating model looks like — belong on a standing steering committee. Engineering decisions — how to make this specific thing work — belong on a temporary task force. AEC firms already use this distinction on every project they ship. Apply it to AI governance and most stalled initiatives unstall.

You already do this work. An architect of record owns the system. An engineer of record owns the implementation inside that system. When the lines blur, the project drifts and the rework piles up. The same logic applies to AI governance, and most firms haven't drawn the line yet.

"Most AI pilots that stall at AEC firms aren't stalled by tooling. They're stalled because architecture-level questions ended up in a task force, or engineering-level questions got escalated to a steering committee."

This is a thinking problem, not a tooling problem. Firms buy the platform. Firms hire the consultants. And then a working group of three people quietly decides what data the model can see, or a steering committee gets pulled into debating which Chrome extension to pilot. The decision class doesn't match the decision body. Pilots stall. Trust erodes. And the firm pays for it twice: once in wasted spend, once in the slower start the second time around.

A solid AI strategy for founder-led firms starts here, with who decides what. Before we map the AI version, we need to define the underlying distinction.

What Architectural Decisions Are (and Aren't)

The difference between architecture and engineering comes down to reversibility. Architectural decisions address requirements that are hard or costly to reverse and affect the system as a whole [1]. Engineering decisions address implementation within that already-decided architecture. They're scoped, changeable, and contained.

The test isn't difficulty. It's reversibility. If you can change your mind in two weeks without unwinding other commitments, it's an engineering decision. If reversal would cascade across the firm, it's architectural.

Jeff Bezos's framing is useful here: one-way doors versus two-way doors. A one-way door is hard to walk back through once you've passed. A two-way door lets you turn around. Empirical work in software architecture [2] reinforces the same dividing line: cost of change is the operative criterion. Architecture is design, but not all design is architectural.

Concrete examples (drawing the line in normal firm life):

  • Architecture: choosing the structural system for a 20-story tower; selecting the firm-wide BIM platform; settling the contract template for design-build delivery
  • Engineering: detailing a single moment connection; configuring a sheet template inside the chosen BIM platform; redlining one project's scope

Software teams write Architectural Decision Records (ADRs) precisely because mixing these two classes of decision causes expensive rework. AEC firms have known this for a century. Now apply the same lens to organizational decision-making.

Steering Committee vs. Task Force, Defined

A steering committee is a standing advisory body that owns strategic direction, policy, and budget approval [3]. A task force is a temporary cross-functional group formed to deliver a specific outcome and dissolved when the work is done [4]. One is built to last. The other is built to ship.

"A steering committee owns architecture-level decisions. A task force owns engineering-level decisions. Conflate them, and your firm gets bureaucracy on the small calls and improvisation on the big ones."

Project-level steering committees can be temporary, tied to one project's lifecycle, while portfolio-level committees are permanent [3]. For AI, you almost always want the portfolio variant: AI is not one project. It's a class of decisions the firm will keep making for the next decade.

| Dimension | Steering Committee | Task Force |
| --- | --- | --- |
| Duration | Standing | Temporary (60–120 days) |
| Decision class | Architecture | Engineering |
| Authority | Approval (policy, platform, budget) | Recommendation within approved policy |
| Output | Standing decisions, policy | Specific deliverable |
| Dissolves when | Doesn't (re-charters) | Deliverable shipped |

The composition differs too, and we'll get to that. But the critical point first: these two bodies map directly onto the two decision classes. Committee owns the architecture. Task force owns the engineering inside it. With definitions in place, here's the AI decision map.

Which AI Decisions Belong Where: The Decision-Rights Matrix

Architecture-level AI decisions (policy, platform, data access rules, risk thresholds, vendor contracts) belong on the steering committee. Engineering-level AI decisions (pilot scoping, tool evaluation against approved policy, workflow redesign for one practice area) belong in a task force.

If reversing the decision would cascade across practice groups, it belongs on the steering committee. If reversal stays inside one project or one team, it belongs in a task force. When the call sits in the gray zone, treat it as architectural and escalate. An over-careful committee review costs a meeting. An under-careful task force precedent costs a year.
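The escalation rule reduces to two questions. Here's a minimal sketch in Python; the function name, field names, and the two-week threshold are illustrative assumptions drawn from the test above, not a prescribed tool:

```python
def classify_decision(cascades_across_groups: bool, unwind_weeks: float) -> str:
    """Route a decision using the reversibility test.

    Anything that cascades across practice groups, or takes more than
    two weeks to unwind, defaults to the steering committee: an
    over-careful committee review costs a meeting; an under-careful
    task force precedent costs a year.
    """
    if cascades_across_groups or unwind_weeks > 2:
        return "steering committee"  # architectural: hard to reverse
    return "task force"              # engineering: scoped, contained

# Example: a pilot scoped to one team, reversible in a week
print(classify_decision(cascades_across_groups=False, unwind_weeks=1))  # task force
```

Note that difficulty never appears as an input: a hard call that stays inside one team is still an engineering decision.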

The matrix below is sized for a $20M–$100M AEC firm. Bigger firms add rows; smaller firms collapse them. The decision classes don't change.

| AI Decision | Body | Why |
| --- | --- | --- |
| AI use policy (what's allowed, what's banned) | Steering Committee | Architecture; firm-wide; hard to reverse |
| Approved AI platform/vendor | Steering Committee | Architecture; data lock-in |
| Data access and confidentiality rules | Steering Committee | Architecture; client and legal exposure |
| Risk thresholds (when human review is required) | Steering Committee | Architecture; liability surface |
| Adding a new approved vendor | Steering Committee | Architecture; precedent-setting |
| Changing how AI outputs are reviewed | Steering Committee | Architecture; QA system-wide |
| Pilot scoping for one practice area | Task Force | Engineering; reversible |
| Tool evaluation within an approved platform | Task Force | Engineering; configurable |
| Workflow redesign for one project type | Task Force | Engineering; contained |
| Training curriculum for one team | Task Force | Engineering; iterable |

This is what an AI decision framework for founders looks like in practice. The matrix is the centerpiece. Knowing what each body decides means knowing who sits on each.
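For firms that track governance calls in a script or decision log, the matrix is simple enough to encode as a plain lookup. A minimal sketch; the decision keys mirror the table and the structure is an assumption, not a standard:

```python
# Decision-rights matrix encoded as a lookup table.
DECISION_RIGHTS = {
    "ai_use_policy": "Steering Committee",
    "approved_platform_vendor": "Steering Committee",
    "data_access_rules": "Steering Committee",
    "risk_thresholds": "Steering Committee",
    "new_approved_vendor": "Steering Committee",
    "output_review_process": "Steering Committee",
    "pilot_scoping": "Task Force",
    "tool_evaluation": "Task Force",
    "workflow_redesign": "Task Force",
    "training_curriculum": "Task Force",
}

def owning_body(decision: str) -> str:
    """Return the body that owns a decision class.

    Gray-zone default: an unlisted decision escalates to the
    steering committee rather than landing in a task force.
    """
    return DECISION_RIGHTS.get(decision, "Steering Committee")
```

The useful property is the default: when a decision isn't on the list, it escalates rather than quietly becoming a task force precedent.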

Who Sits on Each (Composition)

An AI steering committee at a $20M–$100M AEC firm typically includes the executive sponsor, a practice or service-line leader, IT or security leadership, and a legal or risk advisor. A task force pulls 3–5 people for the duration of one specific project, usually a project lead, a hands-on practitioner, and a steering-committee liaison.

Industry guidance on AI governance committees converges on the same role classes: Legal, Ethics and Compliance, Privacy, Information Security and Architecture, Research and Development, and Product Management [5]. Translated into AEC vocabulary: legal counsel, IT/security, a senior project delivery leader, and an executive sponsor.

"AI governance committees typically include Legal, Privacy, Security, Operations, and practice-area leadership. The size scales with the firm, but the function classes don't."

Right-sizing for the $20M–$100M range:

  • Steering committee: 5–7 members. Meets monthly or quarterly. Owns policy, platform, and the approved-vendor list.
  • Task force: 3–5 members. Meets weekly while active. Owns one scoped pilot with a 90-day end date and a named deliverable.

Building an AI culture inside the firm starts with naming who's accountable. Composition is half of that. The other half is what the bodies actually decide. When firms get composition right but conflate the two bodies anyway, here's what breaks.

Failure Modes: When the Wrong Body Owns the Wrong Call

Three patterns predict an AI initiative will stall: a task force trying to set firm-wide policy, a steering committee debating tool configurations, and a "committee" with no charter making no decisions at all.

  1. Task force asked to set policy. When a task force is asked to set firm-wide AI policy, it either stalls (no authority) or sets shadow precedent the firm has to unwind later. The data-access rules invented by a three-person pilot team somehow become the firm's defaults, until someone in legal notices. The fix is reassigning the decision class, not the people.
  2. Steering committee debating tool configs. When a standing committee gets pulled into engineering-level questions (which prompt template, which Chrome extension, which vendor's free trial) the committee becomes a bottleneck. Pilots wait weeks for approvals that should take hours. Per AI governance strategy best practice, engineering-mode work belongs in a task force operating within already-approved policy.
  3. "Committee" with no charter. The data backs this up. Only 27% of boards have written AI governance into committee charters [6], even though 62% now hold regular AI discussions. Just 28% of CEOs and 17% of boards take direct responsibility for AI governance [7]. And while 55% of organizations claim an AI committee [8], most are theater, not governance.

"A meeting is not a body. A body has a charter."

The fix in each case is mechanical. Reassign the decision, not the org chart. Here's a 90-day setup.

A 90-Day Setup Playbook (and the Smallest Viable Version)

A workable AI governance setup for a $20M–$100M AEC firm takes about 90 days. Thirty days to charter the steering committee. Thirty days to ratify policy and the approved platform list. Thirty days to charter the first task force against an approved use case.

  1. Days 1–30: Charter the steering committee. Name the 5–7 members. Write the decision rights down, not in a deck but in a one-page charter. Set cadence (monthly or quarterly). NACD's research is unambiguous on the importance of writing it down [6]: meetings without charters drift.
  2. Days 31–60: Ratify firm-wide policy and approved platforms. Use, data, vendor, and risk-threshold rules go on paper. The committee owns this end-to-end: use-case approval, risk evaluation, vendor selection, policy enforcement, and performance oversight [9].
  3. Days 61–90: Charter the first task force. Pick one approved use case (e.g., automated specification review for one practice group). Name 3–5 members. Set a 90-day deliverable and an explicit end date. When it ships, dissolve.

The smallest viable governance is one body, two modes: architecture-mode quarterly, engineering-mode weekly. Different agendas. Same people.

For firms allergic to standing committees, this is the concession. One body. Two cadences. Quarterly meetings handle the architecture work (policy, platform, vendor). Weekly check-ins handle the engineering work (pilot status, tool evaluation, workflow redesign). Lightweight isn't automatically good, but bureaucracy for its own sake isn't either. Right-sized governance is.

Why AI specifically? AI introduces architecture-level decisions AEC firms haven't faced before: data egress to third-party models, model-output liability, intellectual property exposure across project teams, and confidentiality rules that didn't exist five years ago. These aren't tool choices. They're firm-wide policies. The architecture/engineering distinction is the cleanest framework for handling them. Watch for the hidden costs of AI projects that show up when these calls land in the wrong body.

If your firm is staring at this list and unsure where to start, this is exactly the kind of work Dan Cumberland Labs does with AEC firms in the $20M–$100M range— drafting the steering committee charter, naming the first task force, and mapping which decisions belong where. Peer-to-peer. No vendor pitch.

FAQ

Should our firm have an AI steering committee or an AI task force? Both; they own different decision classes. The steering committee sets firm-wide policy, platform, and risk thresholds. The task force runs scoped pilots within that policy. One decision class per body.

Who should sit on an AI steering committee at a $20M–$100M AEC firm? The executive sponsor, a practice or service-line leader, IT or security leadership, and a legal or risk advisor. Five to seven people total. Industry guidance converges on Legal, Privacy, Security, Operations, and practice leadership [5][9].

How long should an AI task force last? Until the deliverable is shipped, typically 60 to 120 days. Standing problems belong on the steering committee, not in a task force [4]. When the deliverable ships, the task force dissolves.

What's the irreversibility test that separates architecture from engineering decisions? If reversing the decision would cascade across practice groups or take more than two weeks to unwind, it's architectural [1]. If reversal stays inside one project or one team, it's engineering. Reversibility, not difficulty, is the test.

How widespread are AI governance committees? 55% of organizations report having an AI oversight committee [8]. Only 27% have formally written AI governance into committee charters [6]. The other 73% have a meeting, not a body.

Governance maturity isn't measured by how many meetings you hold. It's measured by whether the right body owned the last 10 decisions you made.

Pull up your last quarter. List the AI calls your firm made— vendor selection, pilot scoping, data-access rules, tool approval. Mark each one architecture or engineering. Then check who decided. If the bodies don't match the decision classes, you've found the source of the stall.

Pick the right body, and the right body picks the right call.

References

  1. Wikipedia, "Architectural decision" (2024) — https://en.wikipedia.org/wiki/Architectural_decision
  2. Jansen, A. et al., "Software Architecture Decision-Making Practices and Challenges" (2016) — https://arxiv.org/pdf/1610.09240
  3. ProjectManager.com, "Steering Committee: Definition, Roles & Meeting Tips" (2024) — https://www.projectmanager.com/blog/steering-committee-definition
  4. Jade Rubick, "Task Forces can solve (some of) your cross-functional challenges" (2023) — https://www.rubick.com/task-forces/
  5. OneTrust, "Establishing an AI Governance Committee: An Inside Look at OneTrust's Process" (2024) — https://www.onetrust.com/blog/establishing-an-ai-governance-committee-an-inside-look-at-onetrusts-process/
  6. NACD, "Director Essentials: Implementing AI Governance" (2025) — https://www.nacdonline.org/all-governance/governance-resources/governance-research/director-faqs-and-essentials/implementing-ai-governance/
  7. McKinsey & Company, "The state of AI: How organizations are rewiring to capture value" (March 2025) — https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-how-organizations-are-rewiring-to-capture-value
  8. Gartner, "Toolkit: Setting Up a Committee Charter for AI Governance" (2025) — https://www.gartner.com/en/documents/6186255
  9. Xantrion, "Developing Your AI Strategy: A Steering Committee Is The Foundation" (2024) — https://www.xantrion.com/article/developing-your-ai-strategy-a-steering-committee-is-the-foundation
