# AI Can Make Words, But Not Meaning — And That's Your Ethics Stance

**By Dan Cumberland** · Published April 23, 2026 · Categories: AI Strategy


## What AI Can Actually Do With Words (and Where It Stops)

AI systems generate text by predicting the most probable next token given prior context. They are extraordinarily good at this. They do not understand a single word they produce.
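To make that concrete, here is a deliberately tiny sketch of the prediction step: a bigram counter over a made-up corpus (nothing like a production model) that always emits the statistically most likely next word. It reproduces its training patterns fluently and understands none of them.

```python
from collections import Counter, defaultdict

# Toy illustration, not a real language model: count which word
# follows which in a tiny invented corpus, then "generate" by
# always picking the most frequent follower.
corpus = ("the firm reviews the proposal and the engineer signs "
          "the proposal and the firm approves the work").split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next token given the prior one, or None."""
    counts = next_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # emits the most frequent follower of "the"
```

The mechanism is pure syntax: frequencies in, frequencies out. Scale it up by many orders of magnitude and the output gets far more fluent, but nothing in the procedure adds meaning.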

John Searle demonstrated this in 1980 with what he called the Chinese Room. Picture a person in a room following rules to manipulate Chinese symbols. Correct responses emerge. But the person understands nothing, and neither does the room. According to the Stanford Encyclopedia of Philosophy[1](/blog/blog-construction-meaning#ref-1), Searle's conclusion was unambiguous: "a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind." That person in the room is every AI system.

The technical terms matter here. Syntax (the rules of language) can be followed perfectly without semantics (the meaning behind it). AI has very good syntax. It has no semantics.

(Modern AI models are far more sophisticated than the systems Searle had in mind, and philosophers will keep arguing about this. But for the business question at hand, who is accountable for the professional judgment behind a firm's communications, his framing remains the right practical standard.)

What this distinction means in practice:

- **What AI does well:** Pattern recognition, structural coherence, drafting at scale, stylistic consistency
- **What AI cannot do:** Apply judgment, carry accountability, draw on lived experience, intend anything

Harvard Business Review[2](/blog/blog-construction-meaning#ref-2) put it plainly in March 2026: "The imprint of lived experience is what separates genuine thought leadership from the uninspired prompt output that festoons our feeds." AI can mirror the form of expertise. It cannot have had the experience. And that gap is exactly where [what generative AI actually does](/blog/what-is-generative-ai) stops and where your firm's value begins.

## Why This Distinction Matters Specifically for AEC Firms

AEC firms are entering full-scale AI adoption right now. Bluebeam's AEC Technology Outlook 2026[3](/blog/blog-construction-meaning#ref-3) found that only 27% of firms currently use AI in operations, but 94% of those plan to increase usage in the next year. That's not slow movement. That's the approach of a wave.

And the biggest barriers aren't cost. They're complexity, culture, and connection[3](/blog/blog-construction-meaning#ref-3). Which means what's holding AEC firms back isn't a budget problem; it's a meaning problem dressed in operational language.

Consider what AEC professionals actually provide that no dataset contains:

- The GC whose subcontractor relationships go back fifteen years in one market
- The structural engineer who knows how this city's building department processes variance requests
- The architect whose site intuition has been refined by three dozen projects in a specific climate

None of that is in a training dataset. It cannot be. The Journal of Business Ethics[4](/blog/blog-construction-meaning#ref-4) names the stakes directly: research on AI's implications for meaningful work defines such work as "the perception that one's work has worth, significance, or a higher purpose." Clients hire AEC firms for their judgment, and a firm's communications are where that judgment is expressed. When AI writes those communications, the judgment is implied but not actually present; something gets lost in translation.

That's the construction meaning problem. And it's the one AEC's existing ethics discourse has not addressed.

## The Nine Ethics Issues AEC Has Documented — And the One It Missed

AEC ethics literature has documented nine key concerns about AI: job displacement, data privacy, data security, transparency, decision-making conflicts, trust, reliability, surveillance, and liability. These are real. None of them is the one firms are actually wrestling with.

Research published in *Automation in Construction*[5](/blog/blog-construction-meaning#ref-5) cataloged all nine. They represent the field's current discourse: entirely operational, about what AI does to workflows, employment, and data integrity. Not one asks what AI-generated professional communications mean for the authenticity of a firm's expertise.

Existing professional ethics codes, meanwhile, don't need rewriting to apply. The AIA Code of Ethics requires a consistent pattern of reasonable care and competence[6](/blog/blog-construction-meaning#ref-6). These aren't new constraints. They're existing obligations that now extend into AI-assisted communications.

There's also the copyright dimension. As Proving Ground documented[6](/blog/blog-construction-meaning#ref-6), the US Copyright Office has clarified that AI output cannot receive copyright protection if it is solely the result of prompts; it must pass a threshold of human-influenced originality. For AEC firms producing documents under professional seal, this matters for IP ownership as well as ethics.

ENR[7](/blog/blog-construction-meaning#ref-7) framed this as a safety concern: "The rush to adopt AI can lead to over-reliance on systems that aren't fully understood or properly vetted, which is risky in the AEC world where legal and safety compliance is mandatory." That warning applies with equal force to communications. Using AI to represent professional judgment you haven't actually applied is exactly the over-reliance ENR is describing.

So what does an ethics stance on meaning actually look like in practice?

## An Ethics Stance Is Not a Policy Document — It's a Position

Your ethics stance on AI content is not a prohibition. It is a declaration: the firm decides what the words mean. AI helps produce them.

The false binary to dissolve is "use AI" versus "be authentic." A firm can do both, and both matter. California Management Review[8](/blog/blog-construction-meaning#ref-8) puts it directly: "authenticity is not a compliance checklist— it is a relationship that requires ongoing investment, honest communication, and actions that align with words over time." An ethics stance is one of those investments, not a separate policy document you file away.

This isn't just philosophy. Fielding Jezreel is a federal grant consultant with a decade of expertise in one of the most technically demanding forms of professional writing, and he uses AI tools extensively in his practice. His summary of what actually works states this article's thesis in a practitioner's voice: "It doesn't replace a grant writer. The magic is when you've got someone with deep content expertise and you pair that with AI. Neither one of those things, I think, are as strong alone."

For an AEC firm, the parallel is direct. A structural engineer's twenty years of regional site knowledge, or an architect's intuition about how this city's permitting process actually works: these are the content. AI can help express them more efficiently. It cannot substitute for having them.

FT Longitude research[9](/blog/blog-construction-meaning#ref-9) found that 78% of buyers say intelligent thought leadership increases their trust and likelihood to engage, while 73% warn that poor-quality thought leadership actively damages reputation. The stakes are asymmetric. Trust builds slowly. It erodes fast.

Here's what abdication looks like versus what partnership looks like:

| **Abdication** | **Partnership** |
| --- | --- |
| AI writes the proposal; no one reviews for professional judgment | AI drafts; a licensed professional reviews, revises, and approves |
| Firm communicates at AI's level of generality | AI expresses judgment the firm actually possesses |
| Accountability is unclear | Accountability is clear: the professional is responsible for content and accuracy |

For firms [building an AI culture that preserves firm identity](/blog/building-ai-culture), this is where that work lives: not in the tools, but in the decision about who owns what the words mean.

The regulatory moment is arriving whether firms have articulated their stance or not.

## The Regulatory Moment Is Closer Than It Looks

The EU AI Act's Article 50 transparency obligations, effective August 2026, require disclosure when AI generates text published "to inform the public on matters of public interest." For AEC firms communicating about projects, safety, or professional standards, the regulatory direction is clear, even where it is not yet locally binding.

The exact language[10](/blog/blog-construction-meaning#ref-10):

> "Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated."

For US-only firms, this isn't a direct compliance obligation yet. But global regulatory momentum is pointing in one direction: toward disclosure and accountability for AI-generated professional communications. Even firms with no EU operations should read the signal clearly.


Firms that have articulated their ethics stance already meet the spirit of these requirements. Firms that haven't are one authenticity crisis away from an uncomfortable public explanation. A solid [AI governance strategy](/blog/ai-governance-strategy) gives firms a framework before that moment arrives.

## What Your Ethics Stance Actually Says

An ethics stance on AI content is three sentences. Most firms could write it in an afternoon, if they've done the thinking. The thinking is the work.

Here's a template to adapt:

> *We use AI tools to improve the efficiency of our communications. Every client-facing communication is reviewed, revised, and approved by licensed professionals who take responsibility for its content and accuracy. The professional judgment that makes our work valuable is ours; AI helps us express it more clearly.*

Three elements every ethics stance should include:

1. **Disclosure of AI use** — clear, not buried
2. **Who reviews and approves** — named role, not vague "our team"
3. **Where professional accountability sits** — unambiguous

California Management Review research[8](/blog/blog-construction-meaning#ref-8) found that 71% of consumers feel frustrated by impersonal brand communications, and nearly 40% worry about being misled by brands using AI. These are B2C figures, so take them directionally for the AEC context. But the underlying dynamic isn't sector-specific: clients notice when the people they hired aren't present in the communications they receive.

The firms that lead the next decade aren't the ones that avoided AI. They're the ones that figured out what AI cannot provide, and showed up with it, clearly and consistently.

If you're building a [framework for these AI decisions](/blog/ai-decision-framework-founders) before your firm's next major communications push, now is the right time.

## Frequently Asked Questions

**Can AI write construction firm proposals?**

AI can draft structure and language efficiently. But the professional judgment behind the proposal, including site-specific knowledge, risk assessment, and client relationship context, must come from qualified professionals who remain accountable for the content. ENR[7](/blog/blog-construction-meaning#ref-7) and Frontiers in Built Environment[11](/blog/blog-construction-meaning#ref-11) both identify over-reliance on AI without human oversight as risky in AEC contexts where quality standards are non-negotiable. AI-generated proposals without genuine human review also lack the originality required for copyright protection[6](/blog/blog-construction-meaning#ref-6).

**Does the EU AI Act apply to construction companies?**

If a construction firm publishes AI-generated text to inform the public on matters of public interest, EU AI Act Article 50[10](/blog/blog-construction-meaning#ref-10) requires disclosure, effective August 2026. US-only firms aren't directly covered yet, but the global regulatory direction is toward transparency and accountability requirements for AI-generated professional communications.

**What is an "ethics stance" on AI content?**

A firm's ethics stance on AI content is a clear, articulable position declaring which communications carry human professional judgment and which are AI-assisted production. It's not a prohibition on AI use; it's a commitment to maintaining accountability for meaning, even as AI handles more of the production work. California Management Review[8](/blog/blog-construction-meaning#ref-8) frames authenticity as a strategic imperative, not a compliance checkbox.

**How do AEC firms preserve authenticity while using AI?**

By ensuring licensed professionals review, revise, and take accountability for all client-facing communications. Frontiers in Built Environment[11](/blog/blog-construction-meaning#ref-11) identifies human oversight as mandatory in responsible AI frameworks for AEC: "heavy reliance on AI systems without adequate human oversight may result in unchecked errors." AI handles efficiency. The professional supplies the judgment, site knowledge, and accountability clients are actually paying for.

## Conclusion

AI can write about your thirty years of experience. It cannot have had them.

The firms that lead in the next decade aren't the ones that avoided AI; they're the ones that stayed clear on what AI cannot provide and showed up with it. Your professional judgment, site knowledge, client relationships, and accountability aren't stylistic features. They are the meaning. AI can help you express that meaning more efficiently. What it cannot do is supply it.

Your ethics stance is not a compliance item. It's a competitive differentiator. Articulate it clearly and early, and the trust you've built over decades becomes something AI-generated content cannot replicate.

If thinking through your firm's AI ethics position, or building an [AI strategy for AEC firms](/services/ai-strategy/) that preserves what makes your work distinctive, is on your list, that's work we do at Dan Cumberland Labs.

## References

1. Stanford Encyclopedia of Philosophy, "The Chinese Room Argument" (on Searle's 1980 argument; continuously updated) — [https://plato.stanford.edu/entries/chinese-room/](https://plato.stanford.edu/entries/chinese-room/)
2. Harvard Business Review, "Has AI Ended Thought Leadership?" (March 2026) — [https://hbr.org/2026/03/has-ai-ended-thought-leadership](https://hbr.org/2026/03/has-ai-ended-thought-leadership)
3. Bluebeam, "New Bluebeam Report Shows Early AI Adopters in AEC Seeing Significant ROI Despite Uneven Adoption" (October 2025) — [https://press.bluebeam.com/2025/10/new-bluebeam-report-shows-early-ai-adopters-in-aec-seeing-significant-roi-despite-uneven-adoption/](https://press.bluebeam.com/2025/10/new-bluebeam-report-shows-early-ai-adopters-in-aec-seeing-significant-roi-despite-uneven-adoption/)
4. Bankins & Formosa, "The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work," Journal of Business Ethics (2023) — [https://link.springer.com/article/10.1007/s10551-023-05339-7](https://link.springer.com/article/10.1007/s10551-023-05339-7)
5. "Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry," Automation in Construction (2024; arXiv preprint) — [https://arxiv.org/abs/2310.05414](https://arxiv.org/abs/2310.05414)
6. Proving Ground, "Code and Conduct: Five Areas Where AI Confronts the Architect's Ethics" (October 2025) — [https://provingground.io/2025/10/22/code-and-conduct-five-areas-where-ai-confronts-the-architects-ethics/](https://provingground.io/2025/10/22/code-and-conduct-five-areas-where-ai-confronts-the-architects-ethics/)
7. Engineering News-Record, "What AEC Firms Need to Know: A Guide to Responsible AI Adoption" (October 2024) — [https://www.enr.com/articles/59500-what-aec-firms-need-to-know-a-guide-to-responsible-ai-adoption](https://www.enr.com/articles/59500-what-aec-firms-need-to-know-a-guide-to-responsible-ai-adoption)
8. California Management Review, "Authenticity in the Age of AI" (December 2025) — [https://cmr.berkeley.edu/2025/12/authenticity-in-the-age-of-ai/](https://cmr.berkeley.edu/2025/12/authenticity-in-the-age-of-ai/)
9. FT Longitude, cited in MediaPost, "AI Hasn't Killed Thought Leadership — It's Just Weaponized the Imposters" (March 2026) — [https://www.mediapost.com/publications/article/413745/ai-hasnt-killed-thought-leadership-its-just-w.html](https://www.mediapost.com/publications/article/413745/ai-hasnt-killed-thought-leadership-its-just-w.html)
10. European Union, "Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems," EU AI Act (2024) — [https://artificialintelligenceact.eu/article/50/](https://artificialintelligenceact.eu/article/50/)
11. Frontiers in Built Environment, "Responsible AI in Structural Engineering: A Framework for Ethical Use" (July 2025) — [https://www.frontiersin.org/journals/built-environment/articles/10.3389/fbuil.2025.1612575/full](https://www.frontiersin.org/journals/built-environment/articles/10.3389/fbuil.2025.1612575/full)


---

Source: https://dancumberlandlabs.com/blog/construction-meaning/
