Technology in government has a long history of arriving with high expectations and underdelivering. Justin Fulcher has a specific theory about why, and it does not begin with the technology itself. The founder and former Department of Defense advisor argues that AI initiatives in the public sector tend to fall short when they are introduced into institutions that have not addressed the structural inefficiencies already working against their performance.
The term he uses is institutional drag: the layered weight of siloed data, outdated procedures, and compliance frameworks designed for paper-era operations that collectively slows agencies below the pace their work requires. Funding and ambition, in Justin Fulcher’s analysis, are not the binding constraints. Process is. He has written that core systems across government, healthcare, defense, and infrastructure operate as though decades of technological progress have not occurred. Until the processes surrounding those systems are updated, new tools will largely conform to the old rhythms.
A Career That Crosses Sectors
Justin Fulcher brings unusual credibility to this argument. He built RingMD, a telemedicine platform that scaled across Asia under varied regulatory regimes, before moving into federal service as a Senior Advisor to the Secretary of Defense. In that role, he worked directly on acquisition reforms that shortened software procurement timelines from years to months, translating the general principle into concrete results at one of the world’s largest bureaucracies.
The practical insight from that experience is portable. AI tools that require major organizational change to operate, generate new compliance concerns, or introduce unfamiliar failure modes will face resistance in government settings regardless of their technical quality. The tools that succeed are those built with an accurate picture of institutional constraints from day one, reducing complexity rather than adding to it.
A Long-Term View
Justin Fulcher has been explicit that durable work is defined by stewardship over time, not by early certainty. For AI in government, that means treating deployment as the beginning of an ongoing effort rather than an endpoint. Clear goals, realistic timelines, and a sustained willingness to iterate in response to user feedback determine whether an AI investment builds lasting institutional capacity or becomes another expensive implementation that fails to change how work actually gets done.
Follow Justin Fulcher on Instagram: https://www.instagram.com/justinfulcher/?hl=en