Signals from the Field: What Government Cloud Security Panels Reveal About Enterprise Architecture Maturity — and Why FEAF Is the Root Cause
Observations and Recommendations from the Carahsoft GovExperience Summit
by Steve Else, Ph.D.

The federal government’s cloud security journey reflects both genuine progress and persistent architectural gaps.
Setting the Scene
The Carahsoft GovExperience Summit 2026 convened government leaders, technology partners, and public-sector innovators at the Carahsoft Conference and Collaboration Center in Reston, Virginia. Organized by Government Executive Media Group and underwritten by Carahsoft — with gold sponsors including Knox Systems and Salesforce — the event featured two parallel tracks: Change Management and Emerging Technology. Sessions spanned AI-enhanced efficiency, cloud-powered digital services, cybersecurity user experience, data analytics, and organizational silos.
The summit’s stated mission was to explore how AI, automation, and data intelligence are transforming government service delivery. Featured sessions included “Cloud-Powered Digital Services: Secure, Scalable, Inclusive Government,” “The UX in Cybersecurity,” “Leveraging AI to Enhance Efficiency and Readiness,” and “Overcoming Organizational Silos.” The agenda painted a picture of a federal technology community that is ambitious, aware of its challenges, and actively seeking operational answers to strategic questions.
I attended the summit with the interest that any enterprise architecture practitioner brings to these convenings: not to judge, but to listen. Summits like GovExperience are valuable precisely because they surface the real state of practice — the unscripted moments, the questions that produce long pauses, the gaps between polished slide decks and the lived reality of implementation. What I heard confirmed something I have observed across dozens of government and industry forums over the past several years: the federal cloud security conversation has matured considerably in its ambitions but has not yet matured proportionally in its architectural foundations.
That observation is not a criticism. It is a diagnosis — and diagnoses are only useful if they lead to treatment plans. This article offers both.
Note: A short video summarizing the key points of this article can be viewed here.
What the Panels Revealed: An Architecture Maturity Reading
I sat through two panels that, taken together, provided an unusually clear reading of where much of the federal government stands on cloud security architecture. I will not name individuals — the value of this analysis lies in the patterns, not the personalities — but I will describe what I observed with the candor that our profession requires.
Panel One: Integrating Security into Cloud Platforms
The first panel was organized around a compelling premise: how to integrate security into cloud platforms to identify and block threats. The panelists were positioned as government experts, and the topic could not have been more relevant. Cloud-native security integration is one of the defining challenges of modern enterprise architecture, and the federal government — with its complex compliance landscape, multi-vendor ecosystems, and high-consequence data environments — faces this challenge at a scale and sensitivity that few private-sector organizations can match.
What I observed, however, was a panel discussion that stayed at the surface. Responses lacked architectural depth. When the moderator posed a direct and well-framed question — How would a multi-cloud environment complicate threat detection, and what strategies exist or are planned to master multi-cloud security? — the panelists could not answer. There was no discussion of cross-cloud control plane integration, no mention of federated identity architectures spanning providers, no reference to the security control mapping challenges that arise when workloads are distributed across AWS GovCloud, Azure Government, and other FedRAMP-authorized environments.
A recurring theme was “managing the data supply chain” — a phrase that appeared multiple times throughout the discussion. The concept is sound and increasingly important. But it was discussed without an operational framework behind it: no data lineage models, no trust boundary definitions, no custodial handoff protocols. The phrase functioned more as a shared vocabulary term than as an architectural discipline, which is a pattern I have seen before and which warrants its own analysis below.
A thread that ran through much of Panel One — and would resurface even more forcefully in Panel Two — was the bureaucratic nightmare of Authority to Operate (ATO) verification. Panelists described months-long ATO timelines as a source of deep frustration, particularly in an era where AI-assisted attackers iterate their techniques in hours, not quarters. The temporal mismatch is stark: defenders are locked in sequential certification workflows governed by FAR, FedRAMP, and FISMA while adversaries run continuous, parallel assault loops that exploit the very windows of vulnerability those certification delays create.
What was most striking was the resignation. When pressed on whether ATO workflows could be reformed to match the pace of the threat landscape, the panelists offered little hope. Procurement culture, statutory frameworks, and risk-averse institutional norms were cited not as obstacles to be overcome but as immovable features of the environment. The candor was refreshing; the conclusion was alarming. If the defenders themselves do not believe that their authorization processes can evolve, the architecture community must ask whether continuous assurance models can be designed to satisfy the intent of existing frameworks while dramatically compressing the vulnerability window.
Panel Two: Managing Multiple Clouds
The second panel addressed multi-cloud management directly, and here the picture was more nuanced — and more instructive. Of four panelists, two were fluent and clearly operational. They spoke from experience. Their answers referenced specific architectural decisions, real trade-offs they had navigated, and concrete strategies for maintaining security posture across heterogeneous cloud environments. Their contributions demonstrated that operational excellence in multi-cloud governance is not theoretical — it is achievable, and some corners of the federal government are achieving it.
The other two panelists, however, were visibly out of their depth. Their responses were vague, relying on general statements about “the importance of security” and “working with our cloud providers” without offering specifics about how that work was structured, governed, or measured. AI was discussed briefly but superficially — treated as a feature to add to existing tools rather than as a transformation to the security operating model itself.
The contrast on that stage was striking, and it told a story that no slide deck could have communicated as effectively.
Notably, no panelist on either panel arrived with slides, a structured framework, or a visual aid of any kind. The discussions remained entirely conversational — reactive rather than commanding. An audience member hoping to see at least one presenter step forward with crisp visuals, a compelling architecture diagram, or a rehearsed narrative that demonstrated mastery would have been disappointed. Even the two stronger panelists on this second panel operated conversationally rather than authoritatively; none arrived prepared to teach the room.
This is not a minor point. Government panels at industry summits are how the federal technology community projects competence to vendors, partners, the public, and — critically — to the workforce pipeline it needs to attract and retain. Underprepared panelists, however well-intentioned, reinforce a narrative of government lagging behind the private sector. That narrative, once established, becomes self-fulfilling: top talent gravitates toward organizations that project mastery, and vendors calibrate their engagement based on the sophistication they observe. The presentation readiness of government representatives at public forums is, in this sense, a strategic asset — or a strategic liability.
“The value of these panels was not in the polished answers but in what the silences and hesitations revealed. They provided an unfiltered maturity reading of where much of the federal government stands on cloud security architecture — and the reading, while sobering, is instructive.”
Seven Maturity Signals and What They Mean
From these two panels, I identified seven distinct maturity signals — recurring patterns that, taken together, compose a coherent picture of the current state of federal cloud security architecture. Each signal is a data point, and each points toward specific architectural interventions that the enterprise architecture community can and should provide.
Signal 1: Security as Tooling, Not Architecture
Across both panels, cloud security was consistently framed as a product procurement problem — which tools should we buy? — rather than as an architecture governance problem — how do we design integrated security across the enterprise? This distinction is fundamental. Tools are components; architecture is the discipline of organizing components into coherent, governed, and evolvable systems. When an organization treats security as a collection of tools rather than as an architectural concern, it inevitably ends up with fragmented coverage, redundant capabilities, and gaps that no single product can close.
On most enterprise architecture maturity models, this orientation corresponds to Level 1 (Initial) or Level 2 (Developing) — stages characterized by ad hoc, project-driven decision-making without enterprise-level coordination. The panelists were not describing immature organizations in pejorative terms; they were describing organizations that have not yet made the leap from tool-centric to architecture-centric security governance. That leap is achievable, but it requires deliberate investment in architecture practice, not just technology procurement.
Signal 2: The Multi-Cloud Blind Spot
The inability to articulate a multi-cloud mastery strategy — when directly asked — was perhaps the most revealing moment across both panels. It suggests that many agencies adopted multiple cloud providers tactically, workload by workload, without an enterprise cloud architecture governing those decisions. This is not unusual; it reflects the organic way that cloud adoption unfolded across the federal government, driven by individual program offices, FedRAMP authorization timelines, and vendor relationships rather than by a unified architectural vision.
But the consequences are significant. Fragmented multi-cloud adoption without architectural governance produces exactly the conditions that the NIST Multi-Cloud Security Public Working Group (MCSPWG) was established to address: fragmented visibility, inconsistent security controls across providers, identity federation gaps, and dangerous blind spots in threat detection. When your security monitoring architecture was designed for a single cloud and your workloads now span three, you do not have a multi-cloud security strategy — you have three single-cloud security strategies and a prayer.
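What closing that blind spot looks like in practice can be sketched briefly. A first architectural step toward a single multi-cloud security strategy is normalizing provider-specific findings into one common schema, so that detection logic runs once rather than once per provider. The sketch below is purely illustrative — the provider field names are simplified stand-ins for this example, not actual AWS or Azure API shapes:

```python
# Illustrative sketch: normalize heterogeneous cloud security findings into one
# schema so cross-cloud threat-detection logic runs once, not once per provider.
# Field names are simplified stand-ins, not real provider API structures.

def normalize(provider: str, finding: dict) -> dict:
    """Map a provider-specific finding onto a common internal schema."""
    if provider == "aws":
        return {"provider": "aws",
                "resource": finding["Resource"],
                "severity": finding["Severity"].lower(),
                "title": finding["Title"]}
    if provider == "azure":
        return {"provider": "azure",
                "resource": finding["resourceId"],
                "severity": finding["properties"]["severity"].lower(),
                "title": finding["properties"]["displayName"]}
    raise ValueError(f"unmapped provider: {provider}")

def high_severity(findings: list) -> list:
    """A single cross-cloud detection rule over the normalized stream."""
    return [f for f in findings if f["severity"] in ("high", "critical")]

raw = [
    ("aws", {"Resource": "s3://mission-data", "Severity": "HIGH",
             "Title": "Public bucket"}),
    ("azure", {"resourceId": "/subs/1/vm-7",
               "properties": {"severity": "Low",
                              "displayName": "Outdated agent"}}),
]
normalized = [normalize(p, f) for p, f in raw]
alerts = high_severity(normalized)
```

The design point is the seam: once every provider's output passes through one normalization layer, monitoring, alerting, and reporting become enterprise capabilities rather than three parallel single-cloud efforts.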
Signal 3: Data Supply Chain Without a Supply Chain Architecture
The phrase “data supply chain” appeared repeatedly across both panels, which is itself a sign of progress — the concept has entered the working vocabulary of federal technology leaders. But vocabulary is not architecture. In every instance I observed, the term was used without an operational definition, without reference to a lineage model, and without a governance framework that would make it actionable.
Data supply chain management, properly understood, is an architecture discipline. It requires data lineage tracing — the ability to track every data element from its source, through every transformation, to its point of consumption. It requires trust boundary mapping — clear definitions of where data crosses organizational, jurisdictional, or classification boundaries and what controls apply at each crossing. It requires custodial handoff protocols — formalized processes for transferring data stewardship responsibility as data moves through the chain. Without these architectural foundations, “data supply chain management” remains an aspiration rather than a capability.
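The three elements named above can be made concrete even at sketch level. The code below is illustrative only — every class name, field, and value is invented for this example — but it shows what it means to treat lineage events, trust boundary crossings, and custodial handoffs as first-class governed records rather than shared vocabulary:

```python
# Illustrative sketch: data supply chain concepts as first-class records.
# All names and fields here are invented for the example.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    """One hop in a data element's journey: source, transformation, or use."""
    dataset: str
    action: str      # e.g. "ingest", "transform", "consume"
    actor: str       # system or organization performing the action
    timestamp: str

@dataclass
class TrustBoundaryCrossing:
    """A point where data crosses an organizational or classification boundary."""
    dataset: str
    from_zone: str
    to_zone: str
    controls_applied: list  # controls required at this crossing

@dataclass
class CustodialHandoff:
    """Formal transfer of data stewardship responsibility."""
    dataset: str
    from_custodian: str
    to_custodian: str
    acknowledged: bool = False

def record(dataset: str, action: str, actor: str) -> LineageEvent:
    return LineageEvent(dataset, action, actor,
                        datetime.now(timezone.utc).isoformat())

# A dataset ingested by one agency, transformed, then handed to another:
trail = [
    record("incident-feed", "ingest", "agency-a/collector"),
    record("incident-feed", "transform", "agency-a/etl"),
]
crossing = TrustBoundaryCrossing("incident-feed", "agency-a", "agency-b",
                                 controls_applied=["encrypt-in-transit",
                                                   "pii-scrub"])
handoff = CustodialHandoff("incident-feed", "agency-a", "agency-b",
                           acknowledged=True)
```

Nothing here is sophisticated, and that is the point: the gap on the panels was not tooling but the absence of even this level of explicit structure behind the phrase.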
Signal 4: AI as Afterthought, Not Operating Model
When AI was discussed — and it was discussed only briefly, despite its prominence on the summit agenda — it was framed as an enhancement to existing processes. We are looking at AI to improve our threat detection. We are exploring AI for automating compliance checks. These are reasonable starting points, but they reflect a tool-level understanding of AI integration rather than an operating model transformation.
The distinction matters enormously. Using AI tools within an existing security operating model produces incremental improvements. Redesigning the security operating model around AI-augmented workflows — where AI agents handle continuous monitoring, pattern recognition, anomaly detection, and initial triage while human architects and analysts focus on consequential decisions, strategic response, and governance oversight — produces a fundamentally different capability. This is the difference between adding a spell-checker to a manual typewriter and redesigning the writing process around a word processor. The latter requires rethinking workflows, roles, decision authorities, and performance metrics. It is architecture work, and it was notably absent from the discussion.
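Operating-model redesign means, among other things, encoding which dispositions an AI agent may take autonomously and which must escalate to a human. The sketch below is a hypothetical illustration — the thresholds, field names, and routing categories are invented — but it captures the architectural idea of explicit, auditable decision authorities:

```python
# Illustrative sketch of an AI-augmented triage policy: the model scores every
# event; humans see only what crosses defined decision authorities.
# Thresholds, fields, and categories are invented for this example.

def triage(event: dict, model_score: float) -> str:
    """Route a security event based on AI confidence and consequence."""
    consequential = event.get("asset_tier") == "mission-critical"
    if model_score < 0.2 and not consequential:
        return "auto-close"           # AI disposition, logged for audit
    if model_score < 0.7 and not consequential:
        return "ai-enrich-and-queue"  # AI gathers context; analyst reviews
    return "human-escalation"         # consequential decisions stay human

# Low-risk noise on a dev asset is closed automatically; anything touching a
# mission-critical asset always reaches a person, regardless of model score.
assert triage({"asset_tier": "dev"}, 0.1) == "auto-close"
assert triage({"asset_tier": "mission-critical"}, 0.1) == "human-escalation"
```

Writing such a policy down forces exactly the conversations — about roles, authorities, and metrics — that tool-level AI adoption allows organizations to avoid.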
Signal 5: The Competency Divide
The sharp contrast between the two fluent panelists on Panel Two and the two who struggled was not a commentary on individual capability — it was a window into a systemic workforce challenge. Cloud security architecture requires practitioners who understand both cloud-native architectures (container orchestration, serverless patterns, cloud-native identity, infrastructure as code) and enterprise governance (architecture review boards, decision rights frameworks, portfolio management, compliance traceability). This combination of skills is rare, and current federal workforce development pipelines do not reliably produce it.
The two strong panelists likely developed their expertise through direct operational experience — learning by doing, in environments that demanded both technical depth and governance discipline. That path works for individuals, but it does not scale. The competency divide revealed on that stage is a structural problem that requires a structural solution: deliberate, funded, sustained investment in competency development programs that integrate cloud-native technical skills with enterprise architecture governance frameworks.
Signal 6: The ATO-Agility Paradox
The most consequential finding may have been the panelists’ candid acknowledgment that Authority to Operate (ATO) verification timelines — often stretching six to eighteen months — are fundamentally incompatible with the pace of modern cyber threats. AI-assisted attack toolkits now enable adversaries to discover vulnerabilities, generate exploits, test evasion techniques, and launch campaigns in days or weeks. The federal authorization model, designed for an era of annual security assessments and stable threat landscapes, creates a temporal mismatch that no amount of tool procurement can close. Defenders operating on quarterly or annual certification cycles are structurally disadvantaged against attackers operating on continuous iteration cycles.
What elevated this from a familiar complaint to a maturity signal was the resignation that accompanied it. The panelists did not describe ATO reform as difficult but achievable; they described it as essentially impossible given current procurement culture and statutory constraints. This learned helplessness — the belief that authorization workflows cannot evolve — is itself a maturity indicator. Organizations at higher maturity levels do not accept process constraints as permanent; they architect around them, designing continuous assurance models that satisfy the intent of FISMA and FedRAMP while compressing the window of unmonitored vulnerability. The absence of this architectural thinking on the panels suggests that the ATO-agility paradox is not just a process problem but a conceptual one: the practitioners closest to the problem have not yet framed it as an architecture challenge amenable to an architecture solution.
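The shape of a continuous assurance model can be suggested in a few lines. In the sketch below — illustrative only, with the control checks invented and the identifiers merely echoing NIST 800-53-style labels — controls become machine-checkable predicates evaluated on every cycle, producing a rolling evidence stream rather than a point-in-time certification snapshot:

```python
# Illustrative sketch of continuous assurance: controls as machine-checkable
# predicates evaluated each cycle, yielding timestamped evidence. Control IDs
# loosely echo NIST 800-53-style labels; the checks are invented examples.
from datetime import datetime, timezone

CONTROLS = {
    "AC-2": lambda env: env["inactive_accounts"] == 0,   # account management
    "SC-8": lambda env: env["tls_everywhere"],           # transmission security
    "SI-2": lambda env: env["days_since_patch"] <= 30,   # flaw remediation
}

def assess(env: dict) -> dict:
    """One assurance cycle: evaluate every control, emit timestamped evidence."""
    now = datetime.now(timezone.utc).isoformat()
    return {cid: {"passed": check(env), "checked_at": now}
            for cid, check in CONTROLS.items()}

evidence = assess({"inactive_accounts": 0, "tls_everywhere": True,
                   "days_since_patch": 45})
failing = [cid for cid, result in evidence.items() if not result["passed"]]
# A drifting control surfaces within one cycle, not at the next annual review.
```

Under a model like this, the authorization question shifts from "was this system compliant on assessment day?" to "is this system compliant now, and how quickly do we learn when it is not?" — which is the intent of FISMA and FedRAMP, compressed to the adversary's timescale.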
Signal 7: The Presentation Readiness Gap
Public conference appearances are how government projects competence, shapes vendor behavior, attracts talent, and reassures the public that critical infrastructure is in capable hands. The presentation readiness observed across both panels — no slides, no structured frameworks, no visual aids, no rehearsed narratives — suggests a systemic underinvestment in preparing government practitioners for public-facing roles. This is not about stage presence or charisma; it is about the ability to communicate architectural thinking with the clarity, structure, and authority that the subject matter demands.
The gap is addressable but requires deliberate investment. Organizations at higher maturity levels treat public communication as a professional competency, not a personality trait. They invest in structured briefing preparation — developing visual frameworks, rehearsing key narratives, vetting talking points, and ensuring that every representative who takes a public stage is equipped to project mastery. The enterprise architecture community should advocate for and help design “conference readiness” modules within government competency development programs — not as vanity exercises but as investments in institutional credibility. A government panelist who arrives with a compelling architecture diagram, a clear framework, and a rehearsed narrative does more for public confidence in federal cybersecurity than any number of press releases. The upskilling of government panelists and public communicators is not a nice-to-have; it is a strategic imperative that directly affects the government’s ability to attract talent, command vendor respect, and maintain public trust.
| Signal | What Was Observed | What It Indicates |
| --- | --- | --- |
| Security as Tooling, Not Architecture | Cloud security framed as a procurement decision rather than an architecture governance discipline | Level 1–2 maturity; ad hoc, project-driven security without enterprise coordination |
| The Multi-Cloud Blind Spot | Inability to articulate a multi-cloud mastery strategy when directly asked | Tactical, workload-by-workload cloud adoption without enterprise cloud architecture |
| Data Supply Chain Without Architecture | “Data supply chain” used repeatedly without operational definition or governance model | Vocabulary has outpaced practice; concept adoption without architectural foundations |
| AI as Afterthought | AI discussed as a feature enhancement, not as a transformation of the operating model | Tool-level AI adoption without redesign of security workflows, roles, or decision authorities |
| The Competency Divide | Stark contrast between operationally fluent panelists and those who struggled with specifics | Systemic workforce gap; cloud-native skills and EA governance rarely combined in one practitioner |
| The ATO-Agility Paradox | Months-long ATO timelines acknowledged as incompatible with AI-accelerated threats, with no reform path articulated | Learned helplessness around authorization processes; continuous assurance not yet framed as an architecture solution |
| The Presentation Readiness Gap | No panelist brought slides, frameworks, or visual aids; discussions stayed conversational rather than commanding | Systemic underinvestment in preparing government practitioners as public communicators; upskilling needed |
The Positive Case: Why These Signals Matter
It would be easy — and wrong — to read the maturity signals above as an indictment. They are not. They are a baseline measurement, and the fact that we can take this measurement at all is itself a sign of how far the federal cloud security conversation has come.
Five years ago, these panels would not have existed. The questions that were asked — about multi-cloud security integration, about data supply chain management, about AI-augmented threat detection — would not have been on the agenda because they were not yet in the operational vocabulary of most federal technology leaders. The fact that government leaders are now publicly discussing these challenges, standing on stages and engaging with them in front of peers and industry partners, indicates that the conversation has moved decisively from “whether” to “how.” That transition is significant, and it should be recognized.
Moreover, the two strong panelists on the second panel demonstrated something critically important: operational excellence in multi-cloud governance is achievable within government. Their fluency was not theoretical — it was earned through practice, and it showed. They are proof of concept. They demonstrate that the federal government can develop practitioners who combine deep cloud-native technical expertise with enterprise governance discipline, and that those practitioners can operate at a level that rivals or exceeds their private-sector counterparts. The question is not whether this is possible; it is how to make it the norm rather than the exception.
Events like the GovExperience Summit create the conditions for this kind of honest assessment. When government leaders step onto a stage and engage with hard questions — even when they cannot fully answer them — they are performing an act of institutional courage that deserves respect. The silences and hesitations that I observed were not failures; they were data. And data, properly analyzed and acted upon, is the foundation of improvement.
A Deeper Pattern: Echoes of Vectors
What we observed at the GovExperience Summit is not an isolated data point. It is a symptom of a systemic condition that has been documented at the highest levels of government leadership. Former Acting Secretary of the Navy Thomas B. Modly chronicled precisely this pattern in his 2023 book, Vectors: Heroes, Villains, and Heartbreak on the Bridge of the U.S. Navy — and my analysis of that work reveals striking parallels with the maturity gaps on display at the summit.
Modly’s account of his nineteen months as Under Secretary and then Acting Secretary of the Navy reads, in part, as a case study in what happens when an institution lacks architectural thinking. During his tenure, Modly launched ambitious strategic initiatives: the Future Carrier 2030 study, the Integrated Naval Force Structure Assessment, the Education for Seapower Strategy, and the “Breaking the Mold” initiative at the Naval War College. Each represented an attempt to impose long-range, systems-level thinking on an institution that operated reactively. Each was either canceled by his successor, buried by the Office of the Secretary of Defense, or starved of support by uniformed leaders who, as Modly candidly observed, had grown “frightened” of controversy and controversial decisions. The U.S. Naval Institute’s review of Vectors captured the book’s essence: it is fundamentally about “what it takes to achieve institutional change in the Department of the Navy — and the forces that can resist that change.”
The parallels to our conference observations are direct and uncomfortable. At the GovExperience Summit, we witnessed panelists who could not articulate how to integrate security across cloud platforms — not because they lacked intelligence, but because they lacked a framework for thinking about cross-domain integration. We saw professionals unable to explain how to master multi-cloud environments — not because the answers do not exist, but because no architectural discipline had been applied to structure the problem. We heard resignation about ATO reform — not because reform is technically impossible, but because the institutional culture resists the kind of systemic redesign that architectural thinking demands.
Modly encountered the same resistance at the strategic level. His weekly “SECNAV Vectors” messages — written personally and distributed to every rank in the Department of the Navy — were themselves an architectural act: an attempt to create a coherent narrative across a sprawling organization, to align thousands of disparate actors around common strategic themes. That such an elementary governance mechanism was considered novel speaks volumes about the baseline maturity of institutional communication in the defense establishment.
The Common Thread: Absence of Architecture as a Discipline
What connects Modly’s strategic frustrations with the tactical struggles of government panelists at a cloud security conference is the absence of enterprise architecture as an institutional discipline — not architecture as a compliance exercise or a documentation requirement, but architecture as a way of thinking about complexity, integration, and systemic change.
When Modly pushed for the Integrated Naval Force Structure Assessment, he was attempting enterprise architecture at the force-design level — a holistic view of how platforms, capabilities, manning, and technology interact as a system. When the Office of the Secretary of Defense briefed the National Security Advisor on INFSA without Navy participation and then shelved the study, they were demonstrating exactly the kind of siloed, non-architectural decision-making that produces the incoherent security postures we observed at the summit.
When a conference panelist cannot explain how to integrate threat detection across AWS, Azure, and GCP, the root cause is the same: no one has taught that professional to think architecturally — to see the multi-cloud environment as an integrated system with shared governance, common standards, and coordinated operating rhythms rather than as three separate infrastructure contracts.
From Observation to Obligation
This pattern — documented by a former Secretary of the Navy at the strategic level, and observable by any informed attendee at a mid-tier government technology conference — imposes an obligation on the enterprise architecture community. If architectural thinking is the missing capability, then architects cannot simply lament its absence. We must actively export our discipline into adjacent domains: cloud security, acquisition reform, authorization workflows, and strategic planning.
The maturity signals identified in this article are not merely descriptive. They are diagnostic, and they point to a treatable condition. The treatment is not more tools, more vendors, or more compliance mandates. It is the cultivation of architectural thinking as a core government competency — embedded in training programs, reflected in panelist preparation, and operationalized in the frameworks that govern multi-cloud security, data supply chains, and authorization processes. Secretary Modly titled his communications “Vectors” because each one was intended to provide direction — a magnitude and a bearing. The enterprise architecture community must now provide its own vector: a clear, compelling articulation of how architectural thinking transforms the government’s ability to manage complexity, integrate systems, and respond to threats at the speed the adversary demands.
The Institutional Root Cause: Why FEAF Cannot Fill the Gap
The patterns documented above — from the summit panels to Secretary Modly’s strategic battles — raise an obvious question: where is the federal government’s enterprise architecture framework in all of this? The United States has, in fact, had one for over two decades. The Federal Enterprise Architecture Framework (FEAF), first published by the federal Chief Information Officers Council in 1999 and substantially revised by the Office of Management and Budget in 2013, was designed to be precisely the kind of integrative discipline that the government so clearly lacks. That it has manifestly failed to fill this role is not merely an observation — it is a diagnosis that goes to the root cause of every maturity signal identified in this article.
FEAF, in its current form, is less an architecture framework than a classification and reporting mechanism. Its primary construct is a set of six Reference Models — the Performance Reference Model (PRM), Business Reference Model (BRM), Data Reference Model (DRM), Application Reference Model (ARM), Infrastructure Reference Model (IRM), and Security Reference Model (SRM) — that provide standardized taxonomies for categorizing federal IT investments. The intent behind these reference models was sound: by establishing a common vocabulary for describing IT capabilities across agencies, OMB hoped to identify opportunities for shared services, reduce duplicative investments, and improve cross-agency interoperability. But a classification system is not an architecture framework. Categorizing what exists is fundamentally different from designing what should exist and governing the transformation from the current state to the target state.
The critical gap is that FEAF has no executable change methodology. It provides no architecture development method — no equivalent of TOGAF’s Architecture Development Method (ADM), no iterative cycle of vision, baseline, target state, gap analysis, and migration planning. It provides no governance model — no architecture review board structure, no decision rights framework, no escalation protocols, no mechanism for resolving architectural conflicts across agencies or between agency priorities and government-wide standards. It provides no competency model — no definition of what skills a federal enterprise architect needs, no development pathways, no certification or proficiency standards. And it provides no maturity model — no framework for assessing an agency’s architecture capability and charting a progressive improvement path. In short, FEAF tells agencies how to categorize their IT investments. It does not tell them how to architect their enterprises.
Perhaps most damaging is that FEAF has no owner. No single entity in the federal government is responsible for maintaining, evolving, governing, or enforcing FEAF. OMB published it, but OMB is a budget oversight organization, not an architecture governance body. The Federal CIO Council has referenced FEAF in various policy documents, but the Council has neither the authority nor the operational capacity to function as an architecture governance board for the entire federal enterprise. GSA’s Technology Transformation Services has occasionally hosted FEAF-related resources, but hosting resources is not the same as owning and governing a framework. The result is an orphaned framework — technically in existence, occasionally referenced in policy documents and budget submissions, but practically inert. No one is accountable for its evolution. No one enforces its application. No one measures its effectiveness. FEAF exists in a bureaucratic limbo that is worse than having no framework at all, because its lingering presence creates the illusion that the architectural thinking gap has been addressed when it has not.
This is the most insidious consequence of FEAF’s continued existence. As long as the federal government can point to an “official” enterprise architecture framework, there is reduced institutional motivation to adopt world-class architecture thinking and practices. The introduction of frameworks like TOGAF, the Scaled Agile Framework (SAFe), or integrated approaches such as the Integrale Architecture framework is subtly undermined — not by active opposition, but by passive satisfaction with the status quo. Government executives who might champion a serious architecture practice can be told, “We already have FEAF.” Government architects who push for rigorous architecture development methods, governance structures, and competency programs are told that FEAF “covers” enterprise architecture for the federal government. It does not. But its existence provides a convenient excuse for inaction. The maturity gaps witnessed at the GovExperience Summit — security treated as tooling, multi-cloud architectures without governance, data supply chains without architecture, AI adopted without operating model redesign — are the predictable consequences of an architecture framework that classifies without governing, categorizes without designing, and exists without being owned.
| Capability | FEAF | World-Class Frameworks (TOGAF, SAFe, Integrale) |
| --- | --- | --- |
| Architecture Development Method | None — no iterative development cycle | ADM (TOGAF); Lean-Agile delivery (SAFe); CAIL continuous loop (Integrale) |
| Governance Model | None — no review boards, decision rights, or escalation | Architecture Board, compliance reviews, decision logs |
| Change Methodology | None — no migration planning or transformation roadmap | Phased migration planning, transition architectures, implementation governance |
| Competency Model | None — no skill definitions or development pathways | Defined competency frameworks with proficiency levels and certification |
| Maturity Model | None — no capability assessment framework | ACMM, SAFe Measure and Grow, multi-dimensional maturity assessment |
| Cross-Domain Integration | Limited — Reference Models classify but do not integrate | Federated architecture with cross-domain viewpoints, shared governance |
| Ownership and Accountability | Orphaned — no governing body, no enforcement | Architecture Board with defined authority, review cadence, and escalation |
| Practical Impact on Practice | Minimal — primarily used for budget classification | Directly shapes architecture decisions, reviews, and organizational capability |
The remedy is not to reform FEAF — a quarter century of institutional neglect has rendered it beyond meaningful rehabilitation. The remedy is to acknowledge candidly that FEAF does not constitute a functioning enterprise architecture practice and to invest in the adoption, adaptation, and institutionalization of world-class architecture frameworks that provide what FEAF never has: executable methods, governance structures, competency development, and continuous improvement. The conference panelists who could not articulate a multi-cloud security strategy, the Secretary of the Navy whose strategic initiatives were buried by institutional inertia, the government professionals who arrive at public stages without the architectural fluency their roles demand — all are symptoms of the same root cause. The federal government has had a “framework” for over twenty years. What it has never had is an architecture practice. Building one is the work that lies ahead, and it is the work that the enterprise architecture community must lead.
Nine Recommendations for the Enterprise Architecture Community
The maturity signals from the GovExperience Summit point toward specific, actionable interventions. These recommendations are written for the enterprise architecture community — for the practitioners, leaders, and educators who have the expertise and the responsibility to provide the structural frameworks that turn aspiration into operational capability.
Recommendation 1: Establish Multi-Cloud Reference Architectures
Every agency operating in more than one cloud provider needs a published, maintained multi-cloud reference architecture. This is not a nice-to-have; it is a governance necessity. The reference architecture should define security control mappings across providers — showing how a given NIST 800-53 control is implemented in AWS GovCloud, Azure Government, Google Cloud, and Oracle Cloud, and how those implementations are validated as equivalent. It should define identity federation patterns that ensure consistent authentication and authorization regardless of which cloud hosts a given workload. And it should establish data residency policies that reflect both statutory requirements and operational realities.
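The control-mapping idea above can be sketched as a simple data structure. This is a minimal illustration, not an authoritative mapping: the control-to-service pairings shown are assumptions chosen for readability, and a real reference architecture would maintain validated mappings under governance review.

```python
# Minimal sketch of a cross-provider control mapping. The per-provider
# implementations listed here are illustrative assumptions, not validated
# equivalences; an agency's reference architecture would own the real table.

CONTROL_MAPPINGS = {
    "AC-2": {  # NIST 800-53 AC-2: Account Management
        "aws_govcloud": "IAM Identity Center with SCIM provisioning",
        "azure_government": "Entra ID with periodic access reviews",
        "google_cloud": "Cloud Identity with IAM role audits",
    },
    "AU-6": {  # NIST 800-53 AU-6: Audit Record Review and Analysis
        "aws_govcloud": "CloudTrail feeding Security Hub",
        "azure_government": "Azure Monitor feeding Sentinel",
        "google_cloud": "Cloud Audit Logs feeding the agency SIEM",
    },
}

def implementations_for(control_id: str) -> dict:
    """Return the per-provider implementation of one control, or raise."""
    try:
        return CONTROL_MAPPINGS[control_id]
    except KeyError:
        raise ValueError(f"No mapping recorded for control {control_id}")

def unmapped_controls(required: list) -> list:
    """Controls in the required baseline with no cross-provider mapping yet."""
    return [c for c in required if c not in CONTROL_MAPPINGS]
```

The value of even this trivial structure is the gap report: `unmapped_controls` makes visible exactly which baseline controls have no validated cross-provider implementation, which is the question the panelists could not answer.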
The NIST Multi-Cloud Security Public Working Group is developing guidance that will inform these reference architectures. Enterprise architecture teams should engage with the MCSPWG now — contributing operational experience, reviewing draft guidance, and building internal expertise — rather than waiting for final publications. Reference architectures are living artifacts; building them is a continuous discipline, not a one-time project.
Recommendation 2: Architect the Data Supply Chain
The data supply chain is an architecture artifact, not a compliance checkbox. Enterprise architecture teams should treat data supply chain management as a first-class architecture discipline, applying the same rigor to data flows that they apply to application integration or network topology. This means implementing data lineage models that trace every data element from source to consumption, including every transformation, aggregation, and derivation along the way. It means defining trust boundaries at each custodial handoff — the points where data responsibility transfers from one organization, system, or classification domain to another — and documenting the controls that apply at each boundary. And it means establishing provenance verification for data entering AI and machine learning pipelines, where the quality and integrity of training data directly determine the reliability of the models that consume it.
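The lineage and trust-boundary concepts above can be expressed concretely. The sketch below uses hypothetical node and custodian names; it shows only the core idea — that a custodial handoff is a derivable property of the lineage model, not a separate document.

```python
# Sketch of a data lineage path with custodial trust boundaries.
# Node names, custodians, and transformations are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class LineageNode:
    name: str
    custodian: str                      # organization responsible at this hop
    transformations: list = field(default_factory=list)

def trust_boundaries(path: list) -> list:
    """Return each custodial handoff along the lineage path — the points
    where data responsibility transfers between organizations and where
    boundary controls must be documented."""
    return [
        (a.custodian, b.custodian)
        for a, b in zip(path, path[1:])
        if a.custodian != b.custodian
    ]

pipeline = [
    LineageNode("sensor_feed", custodian="Field Ops"),
    LineageNode("staging_lake", custodian="Agency IT",
                transformations=["dedupe"]),
    LineageNode("training_set", custodian="AI Program Office",
                transformations=["aggregate", "label"]),
]
```

With the lineage modeled this way, provenance verification for AI pipelines becomes a query over the same artifact: every `training_set` node can be traced back through its transformations to an attested source.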
The phrase “data supply chain” has entered the vocabulary. Now the architecture community must give it structure.
Recommendation 3: Integrate Security into the Architecture Governance Framework
Security should not operate as a parallel governance track with its own review processes, its own decision authorities, and its own documentation standards. It should be embedded in the enterprise architecture governance framework — the same framework that governs application architecture, infrastructure architecture, data architecture, and business architecture decisions. Architecture review boards should include security architecture as a standing agenda item, not an occasional guest. Security decisions should follow the same decisioning model as other architecture decisions, with clear escalation paths, documented authority boundaries, and full traceability from requirement to implementation to validation.
When security governance operates in parallel to architecture governance, the result is exactly what we observed on the panels: security discussions that lack architectural depth, and architecture discussions that treat security as someone else’s problem. Integration is the remedy.
Recommendation 4: Adopt AI-Augmented Security Operations as an Architecture Pattern
The enterprise architecture community should move beyond “AI for security” — which operates at the tool level — to “AI-augmented security architecture,” which operates at the operating model level. This means designing human-agent collaboration patterns for threat detection, incident response, and compliance monitoring. In these patterns, AI agents handle continuous monitoring, pattern recognition across high-volume data streams, anomaly scoring, and initial triage — tasks that benefit from machine speed and consistency. Human architects and analysts focus on consequential decisions: threat classification, response strategy, governance implications, and strategic adaptation.
The Augmented Architecture Office (AAO) model provides a framework for this transformation. The AAO model recognizes that AI does not replace architects; it augments them, handling the computational and pattern-recognition workload while human practitioners focus on judgment, context, and governance. Applying this model to security operations requires deliberate architecture work: defining the interfaces between human and AI decision-making, establishing escalation thresholds, designing feedback loops that improve AI performance over time, and documenting the governance framework that ensures accountability.
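The escalation thresholds described above can be made explicit in policy logic. The sketch below is an assumption-laden illustration — the score thresholds and response categories are invented for the example — but it shows the essential design decision: the boundary between machine triage and human judgment is written down, testable, and governable.

```python
# Sketch of a human-agent escalation threshold for security triage.
# Thresholds and response categories are illustrative assumptions.

def triage(anomaly_score: float, affects_classified: bool) -> str:
    """AI agents handle scoring and initial triage at machine speed;
    consequential decisions escalate to a human analyst."""
    if affects_classified or anomaly_score >= 0.8:
        return "escalate_to_human"      # consequential: human judgment required
    if anomaly_score >= 0.4:
        return "auto_contain_and_log"   # AI acts; human reviews asynchronously
    return "log_only"                   # routine: machine consistency suffices
```

Documenting the escalation boundary as code also creates the feedback loop the AAO model calls for: when analysts routinely overturn an automated disposition, the threshold itself becomes an architecture decision to revisit.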
Recommendation 5: Close the Competency Gap with Structured Development
The competency divide observed on the panels is addressable, but not with ad hoc training or conference attendance alone. Agencies should invest in structured competency development programs that combine cloud-native architecture skills with enterprise governance frameworks. The goal is to produce practitioners who are fluent in both domains — who can design a Kubernetes security policy and present an architecture decision record to a governance board, who understand both the technical mechanics of cloud-native identity federation and the organizational dynamics of cross-agency data sharing agreements.
The SAFe competency model and TOGAF’s architecture skills framework both provide templates that can be adapted for cloud security architecture roles. But frameworks alone are not sufficient. Agencies also need structured mentorship programs that pair emerging practitioners with experienced architects, rotational assignments that expose practitioners to multiple cloud environments and governance contexts, and communities of practice that sustain learning beyond formal training. The two fluent panelists at the summit did not become fluent by accident; they became fluent through sustained, deliberate practice. The challenge is to create pathways that make that journey accessible to many more practitioners.
Recommendation 6: Create Cross-Cloud Observability as an Architecture Requirement
Unified observability across cloud providers should be an architecture requirement, documented in the multi-cloud reference architecture and enforced through architecture governance. Enterprise architects should define observability standards — log aggregation patterns, metrics normalization rules, trace correlation protocols — that ensure security teams have enterprise-wide visibility regardless of which cloud hosts a given workload. When a threat actor moves laterally from a compromised workload in one cloud to a target in another, the security team’s ability to detect and respond depends entirely on whether their observability architecture spans the boundary between providers.
This is not an operations concern to be delegated to cloud engineering teams after the architecture is complete. It is a foundational architecture requirement that shapes technology selection, integration design, and governance processes from the outset. Cross-cloud observability should be a first-class artifact in every multi-cloud reference architecture, with defined standards, validation criteria, and governance oversight.
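The metrics-normalization requirement above comes down to a concrete mapping exercise. The sketch below uses simplified placeholder field names rather than the providers' exact event schemas; the point is the architectural pattern — every provider-native event is mapped into one common schema before correlation is attempted.

```python
# Sketch of normalizing provider-specific audit events into one common
# schema, a precondition for cross-cloud correlation. The per-provider
# field names are simplified placeholders, not exact API schemas.

def normalize(provider: str, event: dict) -> dict:
    """Map a provider-native audit event to a common observability schema."""
    if provider == "aws":
        return {"ts": event["eventTime"], "actor": event["userIdentity"],
                "action": event["eventName"], "cloud": "aws"}
    if provider == "azure":
        return {"ts": event["time"], "actor": event["caller"],
                "action": event["operationName"], "cloud": "azure"}
    raise ValueError(f"No normalization rule for provider {provider}")
```

The `raise` branch is deliberate: an event from a provider with no normalization rule is an architecture gap, and it should fail loudly in governance review rather than silently disappear from the security team's field of view.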
Recommendation 7: Use Maturity Models to Drive Honest Assessment
The maturity gaps revealed at the summit are not unique to the panelists who happened to be on stage that day. They reflect a systemic condition across much of federal IT — a condition that is well-documented in GAO reports, inspector general assessments, and FITARA scorecards, even if it is rarely discussed with the specificity that architecture practitioners require.
Enterprise architecture maturity models — whether ACMM (Architecture Capability Maturity Model), NASCIO’s EA maturity framework, or domain-specific models for cloud security, data governance, or AI readiness — should be used regularly and transparently to assess organizational readiness. The key word is transparently. Maturity assessments that are conducted quietly and filed away produce no improvement. Maturity assessments that are shared with leadership, discussed openly, and used to prioritize investment and capability development produce measurable, sustained progress. Honest assessment is not a sign of weakness; it is the first step toward targeted improvement, and it is a discipline that the enterprise architecture community is uniquely positioned to lead.
Recommendation 8: Champion Continuous Authorization as an Architecture Capability
The ATO-agility paradox will not be resolved by policy reform alone — the statutory and procurement frameworks that govern federal authorization are deeply embedded and slow to change. But enterprise architects can design continuous authorization architectures that satisfy the intent of FISMA and FedRAMP while dramatically compressing the vulnerability window. This means architecting infrastructure-as-code pipelines where security configurations are version-controlled, automatically validated against NIST 800-53 control baselines, and continuously monitored for drift. It means implementing policy-as-code frameworks where compliance rules are executable, testable, and integrated into CI/CD pipelines rather than documented in static spreadsheets reviewed annually. And it means building real-time configuration drift detection that alerts security teams the moment a production environment deviates from its authorized baseline — not six months later during the next assessment cycle.
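The drift-detection capability described above is conceptually simple, which is part of the argument for building it. The sketch below is a minimal illustration with hypothetical setting names; production implementations would compare full machine-readable baselines, but the logic is the same.

```python
# Sketch of configuration drift detection: compare a live environment's
# settings against its authorized baseline and flag deviations as they
# appear, not six months later at the next assessment cycle.
# Setting names and values are hypothetical.

def detect_drift(baseline: dict, live: dict) -> dict:
    """Return every setting whose live value deviates from the authorized
    baseline; settings missing from the live environment count as drift."""
    return {
        key: {"authorized": value, "live": live.get(key, "<absent>")}
        for key, value in baseline.items()
        if live.get(key, "<absent>") != value
    }

authorized = {"encryption_at_rest": "enabled", "public_ingress": "denied"}
observed   = {"encryption_at_rest": "enabled", "public_ingress": "allowed"}
```

Run continuously against version-controlled baselines, a check like this turns the static annual assessment into the real-time assurance signal that continuous authorization requires.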
The architecture community should champion continuous authorization not as a replacement for ATO but as an architectural implementation of ATO’s intent: ensuring that systems operate within their authorized security parameters at all times, not just at the moment of certification. This reframing — from ATO as a gate to ATO as a continuous assurance architecture — is precisely the kind of conceptual shift that enterprise architects are trained to provide and that the panelists at the summit were reaching for but could not articulate.
Recommendation 9: Invest in Government Panelist Readiness and Public Communication Excellence
Government agencies should invest systematically in preparing their technical leaders for public-facing roles at conferences, congressional hearings, industry forums, and media engagements. This investment should include structured briefing preparation protocols — developing visual frameworks, architecture diagrams, and clear narratives before every public appearance. It should include presentation coaching that focuses not on generic public speaking but on communicating architectural thinking with precision and authority. And it should include rehearsal processes where panelists practice fielding difficult questions — like the multi-cloud mastery question that stalled Panel One — with substantive, framework-grounded responses.
The enterprise architecture community can contribute directly to this effort by developing a “conference readiness” competency module within existing government training frameworks. Such a module would cover: how to translate complex architecture concepts into clear visual frameworks; how to prepare structured talking points that demonstrate operational fluency; how to anticipate and prepare for challenging questions; and how to project institutional competence without overpromising. The upskilling of government panelists is an investment in the government’s most visible asset — its people. Every underprepared panel appearance erodes public confidence and reinforces the narrative that government cannot keep pace with the private sector. Every well-prepared appearance reverses that narrative and demonstrates that the federal government possesses — and is developing — the architectural talent that the nation’s security demands.
| # | Recommendation | Core Action | Addresses Signal |
| --- | --- | --- | --- |
| 1 | Establish Multi-Cloud Reference Architectures | Publish and maintain cross-provider security control mappings, identity federation patterns, and data residency policies | Multi-Cloud Blind Spot |
| 2 | Architect the Data Supply Chain | Implement lineage models, trust boundaries, and provenance verification as architecture artifacts | Data Supply Chain Without Architecture |
| 3 | Integrate Security into Architecture Governance | Embed security as a standing agenda item in architecture review boards with shared decisioning models | Security as Tooling |
| 4 | Adopt AI-Augmented Security Operations | Design human-agent collaboration patterns using the Augmented Architecture Office (AAO) model | AI as Afterthought |
| 5 | Close the Competency Gap | Build structured development programs combining cloud-native skills with EA governance frameworks | The Competency Divide |
| 6 | Create Cross-Cloud Observability | Define log aggregation, metrics normalization, and trace correlation as architecture requirements | Multi-Cloud Blind Spot |
| 7 | Use Maturity Models Transparently | Conduct regular, shared maturity assessments to prioritize investment and capability development | All Signals |
| 8 | Champion Continuous Authorization | Design infrastructure-as-code, policy-as-code, and real-time drift detection to implement ATO as continuous assurance | ATO-Agility Paradox |
| 9 | Invest in Panelist Readiness and Upskilling | Develop conference readiness modules, briefing protocols, and presentation coaching for government technical leaders | Presentation Readiness Gap |
Closing Reflection
The Carahsoft GovExperience Summit demonstrated something important: the federal government is asking the right questions. It is convening the right people. It is creating spaces where the gap between aspiration and operational readiness can be seen clearly — and where that gap can be addressed with intellectual honesty and professional commitment.
The seven maturity signals I observed are not cause for alarm. They are cause for action. They tell us precisely where the enterprise architecture community can add the most value: in providing the reference architectures, governance frameworks, competency development models, and operating model designs that transform scattered good intentions into coherent, governable, and evolvable capabilities. The strong panelists on that second panel proved that this transformation is possible. The others proved that it is necessary.
The enterprise architecture community has both a responsibility and an opportunity in this moment. The responsibility is to engage — not from the sidelines, and not with abstract frameworks that never touch operational reality, but with specific, actionable, and architecturally rigorous guidance that meets government practitioners where they are and helps them get where they need to be. The opportunity is to demonstrate, through that engagement, that enterprise architecture is not overhead, not bureaucracy, not a compliance exercise, but the essential discipline that turns technological potential into organizational capability. The signals from the field are clear: the architecture community’s moment to lead is now.
About the Author
Steve Else, Ph.D., is the Founder and Editor-in-Chief of the Enterprise Architecture Professional Journal (EAPJ.org) and a recognized thought leader in enterprise architecture. His work focuses on the intersection of architecture governance, emerging technology integration, and organizational capability development.
The Carahsoft GovExperience Summit 2026: Advancing Government Service Delivery and CX was held at the Carahsoft Conference and Collaboration Center, 11493 Sunset Hills Road, Reston, Virginia. The summit was organized by Government Executive Media Group and underwritten by Carahsoft Technology Corp., with gold sponsorship from Knox Systems and Salesforce.
© 2026 Enterprise Architecture Professional Journal (EAPJ.org). All rights reserved. This article may be shared with attribution for professional and educational purposes.