Context
NCDOT needed a practical AI governance framework that could support responsible adoption by agency employees, researchers, contractors, vendors, and third parties. The guidance had to address everyday AI use, research workflows, data sensitivity, procurement decisions, and long-term enterprise adoption.
Challenge
The central challenge was translating a fast-moving, often abstract technology landscape into policy language that non-specialist users could understand and apply. The framework needed to be strict enough to reduce risks to security, privacy, research integrity, and public trust, yet flexible enough to support innovation and legitimate operational use.
My Role
I led the development of the two-phase framework, including the initial Acceptable AI Use Policy and a broader enterprise AI guideline. I shaped the policy structure, clarified use cases and risk categories, and translated technical AI considerations into agency-ready governance language.
Approach
The work separated immediate acceptable-use guidance from broader enterprise AI planning. The first phase focused on user behavior, tool selection, data handling, restricted uses, and responsible disclosure. The second phase addressed how the agency could evaluate AI tools, assess risk, manage adoption, and set consistent expectations for internal teams and external partners.
Output
The project delivered a two-part governance package: an Acceptable AI Use Policy and an enterprise-level AI guideline. Together, the materials defined how AI tools should be evaluated, selected, and used safely across different user groups and operational contexts.
Impact
The framework gave NCDOT a clearer path toward responsible AI adoption. It helped move the conversation from informal experimentation to structured governance, reducing ambiguity around security, privacy, research integrity, and agency-wide AI use.