We are currently drafting an AI charter.
Our parent company put the brakes on AI a few years ago and has since been adopting more of an "R&D is good" approach. (To be clear, the pause on AI wasn't universal; it applied to most employees.)
I have been working to insert some 'quadrants' into our approach and focus on use cases.
For example, any company- or client-specific data is limited and only very specific AI tools may be used with that data.
I am suggesting we also have a use-based policy. If a certain use isn't work-appropriate, we might still allow it on personal hardware and personal time, so that the AI use itself is permitted but company hardware, software, and employee time are excluded.
Then I'm also asking that employees pledge not to use AI for very specific excluded practices. For example, in a recent court case an HOA tried to sue a multi-family developer and presented 'evidence' that was nothing more than fake recorded interviews with 'tenants' who didn't like the conditions. While this sort of thing should be patently obvious as unethical, I think we have to start with what should be obvious and make it concrete.
------------------------------
Andrew Fisher AIA
Yorba Linda CA
------------------------------
Original Message:
Sent: 02-06-2026 02:04 AM
From: Thesla Collier, Intl. Assoc. AIA
Subject: Are you formally governing AI use, or relying on individual judgment?
What guardrails have worked?
------------------------------
Thesla Collier
HNTB
------------------------------