A Civil Firm Ran 11 AI Pilots in One Year. Zero Reached Production. Here's Why.

FINAL — Publication Package


Article Metadata

Field | Value
Slug | blog-ai-in-civil-engineering
URL | /blog/ai-in-civil-engineering
Primary Keyword | ai in civil engineering
Status | READY FOR PRE-PUBLISH HITL
Word Count | 3,694 (target 3,500–4,200) ✅
Voice Score | 17/20 ✅
Engagement Score | 33/35 ✅
Pipeline Date | 2026-04-25

Content

A Civil Firm Ran 11 AI Pilots in One Year. Zero Reached Production. Here's Why.

By Dan Cumberland | Last updated April 25, 2026

The Pattern Behind 11 Failed Pilots

Eighty-eight percent of enterprise AI proof-of-concepts, civil engineering included, never reach production. The civil firm in this article's title is a composite, not a literal firm. But the pattern behind it is real: AI in civil engineering currently behaves like AI everywhere else, only worse. Of every 33 AI pilots a company launches, only four graduate to production deployment, according to IDC's 2025 CIO Playbook1.

That makes "11 pilots, zero in production" arithmetic, not an exception. It's what one year inside the average enterprise pilot funnel actually looks like.
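The funnel arithmetic is easy to check. Assuming each pilot independently faces the same ~88% failure rate (an assumption; pilots inside one firm are correlated in practice), eleven straight misses are not even unusual:

```python
# Sanity check on the pilot funnel. Assumes pilots are independent,
# each with IDC's ~88% chance of never reaching production.

failure_rate = 0.88        # IDC 2025: only ~4 of every 33 pilots graduate
pilots = 11

p_zero_in_production = failure_rate ** pilots
print(f"P(0 of {pilots} pilots ship) = {p_zero_in_production:.1%}")
# Roughly a 1-in-4 chance that a firm running 11 pilots ships none of them.
```

Under these assumptions, "11 pilots, zero in production" is an outcome about a quarter of firms would hit by chance alone.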

The same pattern shows up in the most rigorous primary research available. MIT's NANDA initiative, in its August 2025 GenAI Divide report, analyzed 300 public AI deployments along with 150 leader interviews and 350 employee surveys2. Their finding: 95% of enterprise generative AI pilots deliver no measurable P&L impact2. RAND Corporation's 2024 study put the headline number at 80% project failure— roughly twice the rate of non-AI IT projects3.

Three numbers worth holding in mind:

  1. 88% of AI proof-of-concepts never reach production (IDC, 2025).
  2. 95% of enterprise generative AI pilots show no measurable P&L impact (MIT NANDA, 2025).
  3. 80%+ of AI projects fail, roughly twice the rate of non-AI IT projects (RAND, 2024).

Each independent study converges on the same diagnosis. AI in civil engineering fails for organizational reasons. Firms treat pilot work as procurement instead of as the start of an operating model change. The technology, in nearly every case, works.

IDC's Ashish Nadkarni put it bluntly4: "Most of these gen AI initiatives are born at the board level... these POCs are highly underfunded or not funded at all."

The pattern is industry-wide. Civil engineering's structural data fragmentation— design data in BIM, project data in ERP, field data still on paper— makes it worse. More on that in the next section. For now, hold one frame: this is a leadership-and-data problem, not a tools problem.

But how does the global pattern actually show up in civil engineering specifically? The data is more uneven than the headlines suggest.

AI in Civil Engineering: The Real Adoption State

AI in civil engineering is a story of uneven adoption. Only 27% of AEC professionals currently use AI in their operations, according to Bluebeam's 2026 AEC Technology Outlook Report5— yet 94% of those who do plan to increase their investment next year5. The split between AI-using and non-AI-using firms is widening.

For comparison, McKinsey's State of AI 20256 reports that 88% of organizations across the broader economy now use AI in at least one function. Civil engineering's 27% sits materially behind that benchmark. The gap is real. And it isn't accidental.

AEC AI Adoption Reality Check

Metric | AEC | Cross-Industry
Currently use AI in operations | 27% (Bluebeam) | 88% in at least one function (McKinsey)
Plan to increase AI investment | 94% (of users, AEC) | ~33% scaling enterprise-wide
Paper still used in design phase | 52% (Bluebeam) | n/a
Paper still used in planning phase | 49% (Bluebeam) | n/a
3-year forward forecast | 35% expect AI in >50% of design projects | n/a

The structural challenge is engineering-specific. Half of AEC firms still print drawings during design7. That's the foundation AI has to integrate with. Generic enterprise AI assumes structured digital data. Civil engineering's data lives in three places at once— BIM, ERP, and paper. No vendor demo solves that on day one.

The forward trajectory is also uneven. Bentley Systems, in joint research with Mott MacDonald, Pinsent Masons, and Turner & Townsend, found that 35% of infrastructure firms expect AI to be used in more than half of their design and engineering projects within three years8. That's a forecast. The firms making that bet today are not yet the majority.

There's a simpler way to read these numbers. AI in civil engineering is bifurcating the industry: 27% of firms are deploying AI and 94% of those plan to invest more, while the other 73% are still in evaluation, pilot, or pre-deployment phases.

If most pilots fail and AEC firms lag in adoption, the central question is mechanical. What specifically goes wrong? And why? The research converges on five root causes.

Why AI Pilots Fail— The Five Root Causes

The RAND Corporation's 2024 study on AI project failure identified five root causes— and four of the five are organizational9. The most common cause: business leadership misunderstood the problem the AI was meant to solve.

The pattern centers on how firms handle AI: leadership engagement, data architecture, infrastructure, and use case selection. The sections below walk through each cause in turn, with its civil engineering manifestation made concrete.

Cause 1— Business leadership misunderstands the problem

This is RAND's first root cause9, reinforced by IDC's Nadkarni quote earlier: pilots get launched at the board level on the strength of vendor demos, with no engagement from the engineers who would actually use the output.

In civil engineering, this is specific. A principal sees a generative design demo at a conference, asks IT to "evaluate the technology," and a six-month pilot starts without anyone defining what design problem is constrained or what success looks like. AI mastery is fundamentally about thinking skills and strategy, not just tactics. When the strategy work doesn't happen, the pilot is dead before procurement signs the contract. In our experience, the hidden costs of AI projects accumulate during pilot stages— mostly in engineer time spent on tools that never reach production.

Cause 2— Data quality and accessibility limitations

This is RAND's second root cause9, reinforced by the Bluebeam data: 52% of AEC firms still use paper during design7. Design data is fragmented across BIM platforms. Project data sits in legacy ERP. Field data lives on paper or in unsorted photo libraries. AI cannot reach across these boundaries until the data architecture is rationalized.

Racel Amour, Autodesk's Head of Generative AI for AEC, said it directly10: "Organizing your data with clear standards... will make it easier for AI to deliver value." No model overcomes a fragmented data foundation. Firms that try end up with AI outputs that ignore half the project's reality.

Cause 3— Tech-first focus over user problems

This is RAND's third root cause9: the trap of buying a tool because a peer firm bought it. Deploying generative design without first asking what design problem is actually constrained. Running ChatGPT pilots that produce AI slop instead of usable engineering work.

The fix is unglamorous: start with the work, not the tool. Talk to the engineers who would use the output. Ask what they actually find painful. Then look for tools that fit the pain.

Cause 4— Inadequate infrastructure

This is RAND's fourth root cause9. Bentley/Pinsent Masons' 2025 survey found that more than 33% of infrastructure firms have limited or no project controls to manage AI-related risks11. Even firms that build the model haven't built the governance.

This matters more in civil engineering than in most industries. Project controls are how engineering work gets supervised, signed off, and recorded. Without controls calibrated for AI-assisted work, every output is unaccountable. That's a liability problem, not a productivity problem.

The infrastructure failure looks specific in practice. An AI tool produces a calculation. No one logs which inputs produced it, which engineer reviewed it, or which version of the model generated it. Six months later, an audit asks how a decision was made. The trail goes cold. The governance work isn't glamorous, but its absence is what turns a successful pilot into one that can't survive professional review.

Cause 5— The problem is too difficult for current AI

This is RAND's fifth root cause9. Some engineering work sits genuinely outside what current generative AI can do reliably. Stamp-bearing structural calculations. Safety-critical design judgment. Regulatory-binding interpretation.

The technology can support these tasks. It cannot perform them in production-grade liability terms. More on that in the ethics section below, where ASCE Policy 573 makes the constraint explicit12.

When civil engineering firms launch AI pilots without an AI champion who understands both the engineering work and the data architecture, RAND's root cause #1 is already in motion. The other four follow quickly after.

The five causes describe what stops AI pilots from reaching production. The next question is what production-grade AI in civil engineering actually looks like— and three firms have published enough detail to learn from.

What Production-Grade AI Looks Like— Three Firm Deployments

Production-grade AI in civil engineering looks specific. It's an integrated capability built on top of structured data, governed by engineering judgment, and acquired through specialized vendor partnerships rather than internal builds. Three firms— AECOM, Bechtel, and Haskell— illustrate the pattern at different scales.

AECOM acquires Consigli ($390M, December 2025)

AECOM, the largest engineering firm in the world by revenue, did not build AI in-house. In December 2025, the firm acquired Consigli, an Oslo-based AI startup founded in 2020, for approximately $390 million (4 billion NOK), per reporting in Norwegian daily Dagens Næringsliv13. Consigli brands itself as "The Autonomous Engineer." Its capabilities span space analysis and optimization, MEP load calculations, clash-free Level 3 BIM, and tender document preparation.

The vendor's claims are aggressive. Consigli says its technology can reduce engineering time by up to 90% and cut material use by 20%14. Treat that as a vendor claim, not a documented outcome. The verifiable fact is the acquisition itself.

AECOM is a $16 billion company with 51,000 employees globally. Its answer to AI strategy was acquire, not build.

That pattern matters. If the largest engineering firm in the world chose buy over build, the question for $20M-$100M firms isn't whether to follow the same logic. It's how to scale that logic to their own constraints.

Bechtel's BDAC— Big Data & Analytics Center of Excellence

Bechtel built internal capability and layered partnership on top. Its Big Data & Analytics Center of Excellence (BDAC) operates in partnership with data science firm Miner & Kasch and processes a 5-petabyte data lake using a 3D neural network for construction sequencing15.

A 5-petabyte data lake is a useful detail to sit with. That's the scale of structured project, schedule, and field data Bechtel has accumulated and rationalized to make AI workable. It's not a model story. It's a data architecture story. The neural network is downstream of years of structured-data work that most firms haven't done yet.

The pattern here is more nuanced than simple buy-versus-build. Bechtel has the scale to develop AI in-house. It still chose partnership for specialized capability. AI agents for change management on megaprojects are deployed through this hybrid model— internal Bechtel context combined with Miner & Kasch's data science depth. The lesson generalizes: even firms with deep capability partner with specialists for narrow domains. And the partnership is specialized, not generic— Miner & Kasch is a data science firm with construction sector experience, not a horizontal AI vendor.

Haskell's earthwork balancing in Civil 3D

Haskell's deployment is the most replicable for $20M-$100M firms. On a 65-acre site development, Haskell's civil engineering team balanced more than 260,000 cubic yards of earthwork using AI-powered grading optimization embedded inside Autodesk Civil 3D16— a tool the engineers were already using. John Buehrig, P.E., described the gain plainly17: "The software runs tens of thousands of grading iterations in a matter of minutes, whereas traditional calculations would take days, even weeks."

Notice what's not in that sentence. No mention of a chatbot. No "AI assistant." No company-wide deployment. Haskell embedded AI-driven grading optimization inside a tool engineers already used. Then it balanced 260,000 cubic yards on a 65-acre site in iterations that would have taken weeks by hand.

Mid-large firms can repeat this. Pick one specific high-value workflow. Embed AI inside the tool the engineers already use. Measure the outcome in engineering hours and material outcomes, not in "AI hours saved."

The pattern across all three

Firm | Approach | Use Case | Pattern Extracted
AECOM | Acquired specialist (Consigli, $390M) | Design automation across disciplines | Buy from specialists, even at $16B revenue
Bechtel | Internal capability + partnership (Miner & Kasch) | Construction sequencing, change management on megaprojects | Partner for specialized AI even with internal teams
Haskell | Embedded AI in existing tool (Civil 3D) | Earthwork balancing on a 65-acre site | Single use case, existing workflow, measurable outcome

Four traits show up across all three deployments:

  1. Specialist vendor partnership over internal build
  2. A single, well-scoped use case (rather than "deploy AI broadly")
  3. AI embedded inside existing engineering workflow tools
  4. Outcomes measured in engineering hours and material outcomes, not "AI hours saved"

The pattern across these three firms— buy, don't build alone— is supported by the most rigorous available data on AI deployment success rates.

The 2x Rule— Buy from Specialists, Don't Build Alone

MIT NANDA's analysis of 300 enterprise AI deployments found a clear pattern. Specialized vendor partnerships succeed about 67% of the time. Internal builds succeed only one-third as often2. The success differential is at least 2x in favor of buying from specialists.

This is the headline finding of the largest publicly available analysis of enterprise AI deployment outcomes. And the build-vs-buy decision is rarely as close as vendor pitches make it sound.

Pull quote: Specialized vendor partnerships succeed at AI deployment about 67% of the time. Internal builds succeed only one-third as often. (MIT NANDA, The GenAI Divide, August 2025)

Aditya Challapally, MIT NANDA's lead author, explained the mechanism18: younger companies excel because they "pick one pain point, execute well, and partner smartly." That sentence applies upward. AECOM's $390M Consigli acquisition is the megafirm version of "partner smartly." Even at $16B revenue and 51,000 employees, AECOM chose buy over build.

For firms in the $20M-$100M range: "specialized vendor partnership" doesn't mean a $390M acquisition. It means licensed AI tooling from established vendors— Autodesk, Bentley, ALICE Technologies, nPlan, Consigli (now under AECOM), UpCodes for code research— combined with a fractional implementation partner who can carry the AI champion role for the first 6-12 months. The work is in matching tools to specific high-value workflows. Not in maximizing the AI footprint.

A legitimate concern surfaces here: vendor lock-in. Most $20M-$100M civil engineering firms cannot afford to build AI in-house, and the data says they should not try anyway. The mitigation is contractual, not architectural: protect data and IP in the contract, require export-friendly formats, and choose vendors with API access. The 2x success differential outweighs the lock-in risk.

There's a second concern that comes up in firm conversations: if a vendor solves the problem and competitors use the same vendor, where does the firm's competitive edge go? The honest answer is that the edge moves up the stack. Vendors handle the AI work. The firm's edge becomes which workflows it picks, how it integrates AI into engineer judgment, and how it converts the time savings into more billable work or sharper proposals. The vendor commoditizes the model. The firm doesn't.

Knowing that buying beats building is the easy part. The harder question: where does a firm start, and what does the first six months actually look like?

Practical First Steps for $20M-$100M Civil Firms

Civil engineering firms that move AI from pilot to production share three early decisions. They assign clear ownership. They pick low-stakes high-frequency use cases first. And they invest in data foundation work in parallel with their first deployments.

This is where an AI strategy framework that fits engineering firm constraints becomes more useful than another tool comparison. The work is sequencing, not selection.

Assign clear ownership

The American Council of Engineering Companies (ACEC) found that 63% of member firms now have an AI strategy in place or are actively building one— an 11-point increase from the previous year19. That's the headline number. The finding underneath it is what matters: the firms in the 63% have a named owner.

For smaller firms, that means an AI champion: a single person, usually a principal or senior engineer with technical fluency, who owns the AI roadmap. For larger firms, it means an AI steering committee with cross-departmental representation. For $20M-$100M firms specifically, a fractional AI implementation partner can fill the champion role for the first 6-12 months while the firm builds internal capability. Early ownership and team alignment are the leading indicators of whether AI work survives past the pilot stage.

The 27% of firms making AI work aren't smarter than the 73% who haven't deployed. They have an owner.

Pick the right first use case (low-stakes, high-frequency)

Don't pick the highest-stakes engineering work for your first AI use case. Pick the highest-volume one— proposal automation, code research, meeting transcription— where errors are recoverable and the workflow is well-understood.

Bluebeam's data on early adopters supports the math. Sixty-eight percent of AEC early adopters saved at least $50,000, and forty-six percent reclaimed 500–1,000 hours using AI tools20. These returns don't come from generative design pilots that touch safety-critical work. They come from the unglamorous workflow tasks: proposal automation, RFI/RFP turnaround, document summarization, code research (UpCodes), meeting transcription.

Barge Design Solutions cut health and safety plan creation from 8-10 hours to 10-15 minutes using an AI-powered assistant21. That's a low-stakes, high-frequency target. The error mode is a missed clause that gets caught by the engineer reviewing the output. No PE stamp depends on the first draft.

What's not a first target? Stamp-bearing structural calculations. Generative design optimization in safety-critical applications. Regulatory interpretation that binds engineering judgment. Those come later, once governance is in place.

Jeff Sample at Bluebeam summed up the targeting principle22: "AI is not going to do everything for everybody, but the 27% in our report who are using AI knew what their core problems are and how AI could solve them."

Build the data foundation in parallel

The data fragmentation problem from the adoption section (BIM, ERP, paper) doesn't get fixed before deployment. It gets worked on during deployment.

A practical first 90 days:

  1. Name a data steward. One person responsible for knowing where engineering records actually live.
  2. Document the data architecture. BIM platforms in use, ERP systems, paper-based workflows still in production.
  3. Set standards for new project data structure. Apply on the next project, not retroactively.
  4. Pick a vendor whose AI works inside your existing tools. Avoid platforms that require ripping and replacing.
  5. Define success metrics before signing the contract. Measurement architecture matters as much as model selection. Engineering hours and material outcomes, not vanity metrics.

Don't wait until data is "clean" to deploy. But don't deploy without knowing what the data architecture is.

A note on what to measure. Most firms instrument AI deployments around tool-level metrics— hours logged in the tool, prompts run, documents generated. That's a vanity layer. The metrics that predict whether a pilot reaches production are downstream: hours of engineer time freed for higher-value work, RFP turnaround time, percentage of standard documents drafted by AI and approved with minimal edits, accuracy of AI-generated outputs against engineer review. Track those from week one of the pilot. Without the numbers, the production decision is a leadership opinion contest, not a data-backed call.
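As a concrete sketch of that measurement layer (the field names and example figures here are hypothetical, not from any cited framework), a weekly pilot scorecard can be this simple:

```python
# Illustrative weekly pilot scorecard for the downstream metrics named above.
# All field names and example figures are hypothetical.

from dataclasses import dataclass

@dataclass
class PilotWeek:
    engineer_hours_freed: float       # hours redirected to higher-value work
    rfp_turnaround_days: float        # average RFP response time this week
    docs_drafted_by_ai: int           # standard documents with an AI first draft
    docs_approved_minimal_edits: int  # of those, approved with minimal edits

def scorecard(weeks: list[PilotWeek]) -> dict:
    """Aggregate the metrics that predict pilot-to-production survival."""
    total_drafted = sum(w.docs_drafted_by_ai for w in weeks)
    approved = sum(w.docs_approved_minimal_edits for w in weeks)
    return {
        "hours_freed": sum(w.engineer_hours_freed for w in weeks),
        "avg_rfp_turnaround_days": sum(w.rfp_turnaround_days for w in weeks) / len(weeks),
        "approval_rate": approved / total_drafted if total_drafted else 0.0,
    }

weeks = [
    PilotWeek(12.0, 6.5, 8, 6),
    PilotWeek(15.5, 5.0, 10, 9),
]
print(scorecard(weeks))
```

The point is not the tooling; a spreadsheet does the same job. The point is that these numbers exist from week one, so the production decision can be made on evidence.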

The practical steps work because they respect a constraint that's easy to forget. AI cannot bear engineering responsibility. ASCE has been explicit about this.

The Ethics Constraint— ASCE Policy 573

ASCE Policy Statement 573, adopted in July 2024, establishes the constraint every civil engineering AI deployment must respect12: "AI cannot be held accountable, nor can it replace the training, experience, and judgment of a professional engineer."

It is the design constraint every production AI deployment in civil engineering already operates under.

ASCE's Code of Ethics Section 1h reinforces the broader frame23: engineers must "consider the capabilities, limitations, and implications of current and emerging technologies when part of their work." In practical terms: every AI output that touches an engineering decision needs a documented human review checkpoint. A reviewer. A record. A signoff. Not optional.

This is where the framing of AI as intellectual augmentation lands. AI augments engineering judgment. It does not replace the engineer's signature, accountability, or liability. Every production deployment must be architected around this fact.

The implication for tool selection is concrete. Workflows where the engineer reviews and approves the output before it leaves the firm— proposal automation, code research, plan summarization, meeting transcription— are AI-ready under Policy 573. Workflows where AI output goes directly into a stamped deliverable without a reviewing engineer in the loop are not. The constraint shapes the design.

The firms that move AI from pilot to production design around the policy and the data, not around the latest tool. Policy 573 isn't a brake on AI deployment. It's the design specification that separates production-grade AI from another stalled pilot.

The Path Forward

If your civil engineering firm has run AI pilots that didn't reach production, the cause is almost certainly organizational. The five root causes in this article describe most failures. The prescription— single use case, specialized vendor partnership, data foundation work in parallel, engineer accountability preserved— describes most successes.

The 88% failure rate is not the future. Firms in the 12% have a documented pattern. AECOM bought. Bechtel partnered. Haskell embedded a single high-value workflow inside a tool engineers were already using. None of those three firms is irreplicable for a $20M-$100M civil engineering practice. The scale is different. The pattern is the same.

Usman Shuja, Bluebeam's CEO, framed the road forward this way24: "The biggest barriers... in 2026 aren't cost— they're complexity, culture, and connection." AI in civil engineering is a leadership, data, and partnership problem. Solve those three and the technology takes care of itself.

For firms that want to skip directly to the production-deployment pattern, a fractional AI implementation partner can carry the AI champion role for the first 6-12 months, run the AI Strategy Audit that sequences the right tools to the right workflows, and hand the firm a plan it owns— without lock-in. The work isn't fancy. But it's the difference between 11 pilots in a year and the one that actually ships.

If that's the work you want to do, Dan Cumberland Labs is built for it.

References

  1. IDC + Lenovo (via CIO.com), "88% of AI pilots fail to reach production - but that's not all on IT" (2025) — https://www.cio.com/article/3850763/88-of-ai-pilots-fail-to-reach-production-but-thats-not-all-on-it.html
  2. MIT NANDA, "The GenAI Divide: State of AI in Business 2025" (2025) — https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
  3. RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed: Avoiding the Anti-Patterns of AI" (2024) — https://www.rand.org/pubs/research_reports/RRA2680-1.html
  4. IDC (via CIO.com), "88% of AI pilots fail to reach production" (2025) — https://www.cio.com/article/3850763/88-of-ai-pilots-fail-to-reach-production-but-thats-not-all-on-it.html
  5. Bluebeam, "2026 AEC Technology Outlook Report" (2025) — https://press.bluebeam.com/2025/10/new-bluebeam-report-shows-early-ai-adopters-in-aec-seeing-significant-roi-despite-uneven-adoption/
  6. McKinsey & Company / QuantumBlack, "The State of AI in 2025: Agents, Innovation, and Transformation" (2025) — https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
  7. Bluebeam (via ASCE), "Architecture, Engineering, Construction Sector Slow to Adopt AI, Survey Shows" (2025) — https://www.asce.org/publications-and-news/civil-engineering-source/article/2025/12/18/architecture-engineering-construction-sector-slow-to-adapt-ai-survey-shows
  8. Bentley Systems / Pinsent Masons / Mott MacDonald / Turner & Townsend (via ASCE), "AI Use in Infrastructure Set to Soar, As Firms Weigh Risks and Returns" (2025) — https://www.asce.org/publications-and-news/civil-engineering-source/article/2025/12/08/ai-use-in-infrastructure-set-to-soar-as-firms-weigh-risks-and-returns
  9. RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects" (2024) — https://www.rand.org/pubs/research_reports/RRA2680-1.html
  10. Autodesk (via Building Design + Construction), "AI in AEC: Where firms should start and how to scale adoption" (2025) — https://www.bdcnetwork.com/aec-tech/article/55359703/ai-in-aec-where-firms-should-start-and-how-to-scale-adoption
  11. Bentley Systems / Pinsent Masons / Mott MacDonald / Turner & Townsend (via ASCE), "AI Use in Infrastructure Set to Soar" (2025) — https://www.asce.org/publications-and-news/civil-engineering-source/article/2025/12/08/ai-use-in-infrastructure-set-to-soar-as-firms-weigh-risks-and-returns
  12. ASCE, "Mishandling AI Tools Puts Civil Engineers at Risk for Ethical Violations" (2025) — https://www.asce.org/publications-and-news/civil-engineering-source/civil-engineering-magazine/issues/magazine-issue/article/2025/03/mishandling-ai-tools-puts-civil-engineers-at-risk-for-ethical-violations
  13. Mogin Law LLP (analysis of Dagens Næringsliv reporting), "AECOM's $390 Million Bet on Artificial Intelligence" (2025) — https://moginlawllp.com/artificial-intelligence-acquisition-aecom-consligli/
  14. Consigli (vendor claim, via Mogin Law LLP), "AECOM's $390 Million Bet on Artificial Intelligence" (2025) — https://moginlawllp.com/artificial-intelligence-acquisition-aecom-consligli/
  15. Bechtel, "Applications of Artificial Intelligence in EPC" (2024) — https://www.bechtel.com/newsroom/blog/innovation/applications-of-artificial-intelligence-in-epc/
  16. Haskell, "Earthwork Balancing with AI: Haskell Civil Engineers' Success Story" (2025) — https://www.haskell.com/insights/earthwork-balancing-with-ai-haskell-civil-engineers-success-story/
  17. Haskell, "Earthwork Balancing with AI: Haskell Civil Engineers' Success Story" (2025) — https://www.haskell.com/insights/earthwork-balancing-with-ai-haskell-civil-engineers-success-story/
  18. MIT NANDA (via Fortune), "MIT report: 95% of generative AI pilots at companies are failing" (2025) — https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
  19. ACEC Research Institute, "Engineering Business Sentiment Survey Q1 2025 (cited in ACEC Technology Committee Primer)" (2025) — https://www.acec.org/resource/why-and-how-a-primer-on-ai-integration-for-engineering-firms-from-the-acec-technology-committee/
  20. Bluebeam, "2026 AEC Technology Outlook Report" (2025) — https://press.bluebeam.com/2025/10/new-bluebeam-report-shows-early-ai-adopters-in-aec-seeing-significant-roi-despite-uneven-adoption/
  21. Building Design + Construction, "AI in AEC: Where firms should start and how to scale adoption" (2025) — https://www.bdcnetwork.com/aec-tech/article/55359703/ai-in-aec-where-firms-should-start-and-how-to-scale-adoption
  22. Bluebeam (via ASCE), "Architecture, Engineering, Construction Sector Slow to Adopt AI, Survey Shows" (2025) — https://www.asce.org/publications-and-news/civil-engineering-source/article/2025/12/18/architecture-engineering-construction-sector-slow-to-adapt-ai-survey-shows
  23. ASCE, "Mishandling AI Tools Puts Civil Engineers at Risk for Ethical Violations" (2025) — https://www.asce.org/publications-and-news/civil-engineering-source/civil-engineering-magazine/issues/magazine-issue/article/2025/03/mishandling-ai-tools-puts-civil-engineers-at-risk-for-ethical-violations
  24. Bluebeam (via ASCE), "Architecture, Engineering, Construction Sector Slow to Adopt AI, Survey Shows" (2025) — https://www.asce.org/publications-and-news/civil-engineering-source/article/2025/12/18/architecture-engineering-construction-sector-slow-to-adapt-ai-survey-shows
