<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Agentic Enterprise]]></title><description><![CDATA[The Agentic Enterprise covers how Enterprise AI is actually built, adopted, and scaled inside real organizations.]]></description><link>https://kimura.yumiwillems.com</link><image><url>https://substackcdn.com/image/fetch/$s_!WgdB!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42a6d15b-0244-4e8d-8392-992a2263844d_1280x1280.png</url><title>The Agentic Enterprise</title><link>https://kimura.yumiwillems.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 03 May 2026 03:27:28 GMT</lastBuildDate><atom:link href="https://kimura.yumiwillems.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Yumi Kimura]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[yumiwk@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[yumiwk@substack.com]]></itunes:email><itunes:name><![CDATA[Yumi W. Kimura]]></itunes:name></itunes:owner><itunes:author><![CDATA[Yumi W. Kimura]]></itunes:author><googleplay:owner><![CDATA[yumiwk@substack.com]]></googleplay:owner><googleplay:email><![CDATA[yumiwk@substack.com]]></googleplay:email><googleplay:author><![CDATA[Yumi W. 
Kimura]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Human as Context: Why Enterprise AI Needs More Than Documents]]></title><description><![CDATA[This is not the official second piece of this newsletter.]]></description><link>https://kimura.yumiwillems.com/p/human-as-context-why-enterprise-ai</link><guid isPermaLink="false">https://kimura.yumiwillems.com/p/human-as-context-why-enterprise-ai</guid><dc:creator><![CDATA[Yumi W. Kimura]]></dc:creator><pubDate>Sat, 04 Apr 2026 02:44:45 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193132463/518281ca1b01adf5afb5af827e5dbd3a.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>This is not the official second piece of this newsletter. Before I move into the next essay, I wanted to briefly return to something I touched on in the last post but did not fully unpack.</p><p>Previously, I wrote about the missing context layer in enterprise AI: why systems that can read documents and generate answers still struggle inside real organizations. The issue is not only information access. It is the lack of understanding around how work actually happens: who is trusted, who holds real influence, how decisions move, and where human judgment still matters.</p><p>This time, I want to focus more directly on the People layer. To me, this is where a large part of enterprise intelligence actually lives. Not just in content, but in relationships, expertise recognition, informal authority, and the social patterns that shape execution.</p><p>I decided to record this as a quick 7-minute podcast because some people prefer listening to ideas rather than reading them. I also wanted to see whether there is appetite for hearing more of this topic in audio form. :)</p><p>If this resonates, leave a thought or question. I would love to know what part of this feels most relevant, most controversial, or most worth unpacking further. 
I may address some of those directly in the official second piece of the newsletter.</p>]]></content:encoded></item><item><title><![CDATA[The Missing Layer Between AI Pilots and Enterprise Scale]]></title><description><![CDATA[Models can read the documents. They still need the context to navigate the real organization.]]></description><link>https://kimura.yumiwillems.com/p/the-missing-layer-between-ai-pilots</link><guid isPermaLink="false">https://kimura.yumiwillems.com/p/the-missing-layer-between-ai-pilots</guid><dc:creator><![CDATA[Yumi W. Kimura]]></dc:creator><pubDate>Wed, 18 Mar 2026 08:01:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2oPj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Enterprise AI is finally moving from experiments to production. But as models, tools, and agents spread through organizations, one gap keeps showing up: the AI still does not understand how the organization actually works.</p><p><a href="https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html">Deloitte&#8217;s 2026 enterprise AI report</a> captures the moment well. AI access is rising, more experiments are reaching production, and agentic adoption is accelerating. But activation still lags, and governance remains immature.</p><p><strong>Enterprise AI is scaling, but not cleanly.</strong></p><ul><li><p><strong>Access is rising:</strong> worker access to sanctioned AI tools grew from under 40% to under 60% in a year.</p></li><li><p><strong>Activation is lagging:</strong> among workers with access, fewer than 60% use AI in their daily workflow.</p></li><li><p><strong>Production is still limited:</strong> only 25% of respondents say 40% or more of their AI experiments are in production today, though 54% expect to reach that level within three to six months.</p></li><li><p><strong>Agentic adoption is coming fast:</strong> 74% of companies plan to use agentic AI at least moderately within two years.</p></li><li><p><strong>Governance is behind:</strong> only 21% report having a mature governance model for autonomous agents.</p></li></ul><p>That pattern matters because the next phase of enterprise AI is not about whether models work. It is about whether AI can function reliably inside real organizations.</p><p>A pilot can look impressive in a clean environment. It runs with a small team, scoped data, limited stakeholders, and fewer consequences. Production is different. Production requires integration with existing systems, security reviews, compliance checks, monitoring, maintenance, and ongoing operational ownership. It also exposes the realities pilots can hide: edge cases, coordination problems, conflicting priorities, and the harder work of scaling what succeeded in isolation. 
Deloitte calls this the proof-of-concept trap.</p><p>I would add one more reason pilots stall: organizational context.</p><p>Most enterprise AI systems today are built on two layers of data.</p><p><strong>Tier 1: Structural</strong> &#8212; org charts, titles, reporting lines<br><strong>Tier 2: Transactional</strong> &#8212; documents, tickets, messages, meeting notes</p><p>These layers matter. They tell AI what the official organization looks like and what information has been recorded. But they do not tell AI how decisions actually move in practice.</p><p>That missing third layer is behavioral context: who people trust, who really makes the call, where work actually gets escalated, whose approval matters under pressure, which workflows exist on paper versus in reality, and when someone may be formally responsible but practically unavailable.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2oPj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2oPj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 424w, https://substackcdn.com/image/fetch/$s_!2oPj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 848w, https://substackcdn.com/image/fetch/$s_!2oPj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 1272w, 
https://substackcdn.com/image/fetch/$s_!2oPj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2oPj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png" width="1456" height="779" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:779,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1066676,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://yumiwk.substack.com/i/191342377?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!2oPj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 424w, https://substackcdn.com/image/fetch/$s_!2oPj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 848w, 
https://substackcdn.com/image/fetch/$s_!2oPj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 1272w, https://substackcdn.com/image/fetch/$s_!2oPj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7602d542-3df9-4374-a552-206c2a7cf636_1588x850.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>A recent essay from <a href="http://www.lead.app">LEAD</a>&#8217;s <a href="http://www.behaviorgraph.com">BehaviorGraph</a> project
makes this distinction explicitly: most enterprise AI categories still operate on structural and transactional data, while real decisions often live in a third behavioral tier. That is why so many systems are technically correct and operationally wrong.</p><p>An AI agent retrieves the right policy document. But the real decision path shifted six months ago because a trusted legal counsel or staff engineer became the true checkpoint.</p><p>An AI routes a request to the right person by title. But that person is overloaded, politically peripheral, or no longer the one others actually defer to.</p><p>A drafting tool proposes the right message. But the account is sensitive, and the outreach needs to go through a specific internal sponsor.</p><p>In each case, the content may be correct. The action is still wrong.</p><p>That is the kind of failure the next generation of enterprise AI has to solve. Not just whether the system can retrieve, summarize, generate, or classify. But whether it can operate with enough awareness of human dynamics, decision paths, and organizational reality to act appropriately.</p><p>My research calls this the <strong><a href="https://doi.org/10.7916/9da8-c532">Organizational Intelligence Loop</a></strong>, or <strong>OIL</strong>: a framework for what enterprise AI needs in order to operate inside a real organization.</p><ul><li><p><strong>People</strong> &#8212; who knows what, who is trusted, who influences outcomes</p></li><li><p><strong>Information</strong> &#8212; what is current, owned, validated, and permissioned</p></li><li><p><strong>Process</strong> &#8212; how work actually flows, where bottlenecks form, where decisions stall</p></li><li><p><strong>Agentic AI Design</strong> &#8212; what an agent is allowed to do, when it should escalate, and how governance is embedded into action</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!zk-J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zk-J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 424w, https://substackcdn.com/image/fetch/$s_!zk-J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 848w, https://substackcdn.com/image/fetch/$s_!zk-J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 1272w, https://substackcdn.com/image/fetch/$s_!zk-J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zk-J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png" width="1456" height="798" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:798,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1565252,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://yumiwk.substack.com/i/191342377?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zk-J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 424w, https://substackcdn.com/image/fetch/$s_!zk-J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 848w, https://substackcdn.com/image/fetch/$s_!zk-J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 1272w, https://substackcdn.com/image/fetch/$s_!zk-J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac522b99-ef20-4d5d-b614-563e021d7fe8_1592x872.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>The OIL Framework: What enterprise AI needs to operate in the real organization. Most current systems are stronger on Information and Process than on People context and agentic guardrails.</em></figcaption></figure></div><p>The point is not that enterprise AI lacks intelligence in the abstract. The point is that most systems still lack the right operating context.</p><p>Information alone is not enough. A system can retrieve the most relevant document and still fail because the document is outdated in practice. Process alone is not enough. A workflow can be mapped correctly on paper and still fail because it misses the informal checkpoint that actually determines whether work moves forward. 
Even governance alone is not enough if it is defined only as policy after the fact rather than situational judgment at the moment of action.</p><p>This matters even more as agentic AI scales. Deloitte reports that 74% of companies expect to use agentic AI at least moderately within two years, yet only 21% say they already have mature governance for autonomous agents. That gap is serious because agents do not merely recommend actions. They can take them directly. They can route work, trigger workflows, escalate issues, make updates, and interact with systems at speed.</p><p>I agree with the concern, but I would push the argument further. Governance is not only about what happens after an agent acts. It is also about whether the agent had enough organizational context to act correctly in the first place.</p><p>A registry of agents is not the same as a behavioral governance layer. A policy is not the same as a live authority map. A correct answer is not useful if it is routed to the wrong person, exposed to the wrong audience, or executed in the wrong sequence.</p><p>This is where many discussions of enterprise AI remain too narrow. They focus on model quality, knowledge retrieval, prompt engineering, or tool integration. All of those matter. But once AI begins to operate across real business environments, another question becomes unavoidable: does the system understand how the organization functions as a living system, not just as a collection of files and formal roles?</p><p>That question becomes especially important when work is ambiguous, political, or cross-functional. In those settings, success is rarely determined by content alone. It depends on timing, trust, influence, permission, overload, sequencing, and informal authority. Those are not edge issues. They are often the difference between adoption and resistance, execution and delay, correctness and failure.</p><p><strong>This is also why Organizational Network Analysis matters again, but in a new form. 
</strong>Historically, ONA was often periodic, retrospective, and consulting-heavy. It was useful for diagnosing hidden influence or collaboration breakdowns, but it was rarely built into daily operations. What enterprise AI needs now is not a one-time map of collaboration. It needs a continuous layer that can detect trust, influence, overload, escalation paths, and decision patterns as the organization changes.</p><p>The real leap is treating ONA as continuous infrastructure rather than a one-time deliverable, turning behavioral signals into live, queryable context for AI systems.</p><p>That shift matters because enterprises do not stand still. Teams reorganize. Decision-makers change. Experts become overloaded. Informal power shifts. New initiatives create temporary hubs of influence. Legacy processes linger long after they stop reflecting how work really gets done. If AI is expected to operate inside this environment, it cannot rely only on static maps and historical documents. It needs a more dynamic understanding of the organization it is acting within.</p><p>In that sense, the next enterprise AI category may not simply be better copilots or more agent actions. It may be the infrastructure that makes organizational reality queryable at runtime.</p><p>Not just what the company says it is.<br>Not just what its documents record.<br>But how it actually functions.</p><p>That is also why I think the enterprise AI market is still missing an important category. Search platforms help organizations retrieve information. Foundation models help them reason across language. Workflow tools help automate process. HR and people analytics tools help analyze talent and engagement. But there is still a gap between these categories: the live organizational context layer that helps AI understand who matters, what is current, how decisions move, when escalation is needed, and where formal process diverges from practical reality.</p><p>Without that layer, AI can still be useful. 
But it will struggle to become reliable infrastructure for enterprise execution.</p><p>Deloitte&#8217;s report ultimately lands in a similar place from a different direction. It argues that organizations need to close the gap between access and activation, redesign work around AI, build governance before scale, and treat AI as foundational to how the organization operates. I agree. But I would add that redesigning work around AI requires redesigning AI around the organization as it actually behaves.</p><p>That is a different challenge from simply deploying more tools. It is not just a product question. It is an organizational intelligence question.</p><p>The companies that solve this well will not necessarily be the ones with the most pilots, the most aggressive branding, or the fastest rollouts. They will be the ones that understand that enterprise AI is not only a technical layer. <strong>It is an operating layer. </strong>And operating layers fail when they cannot read the real environment they are supposed to work inside.</p><p>That is the territory this newsletter will cover.</p><p>Not AI as theater.<br>Not AI as demo.<br>But AI as it is actually built, adopted, governed, and scaled inside real organizations.</p><p>Because the hardest part of enterprise AI is no longer getting a model to work.</p><p>It is getting the organization to work with it.</p><p>Next issue: OIL Dimension 1 &#8212; People. How can enterprise AI begin to understand trust, influence, and informal authority without relying on a static org chart or a six-month survey?</p>]]></content:encoded></item></channel></rss>