How to turn shadow AI into a safe agentic workforce: Lessons from Barndoor AI
At most enterprises right now, AI adoption has a strange split personality.
On the surface, there are the sanctioned initiatives: a strategic LLM partnership, a vendor-led pilot, and a few carefully worded policy documents. Underneath, there's the real activity. Engineers quietly connect AI applications to enterprise data and SaaS tools using the Model Context Protocol (MCP).
Staff are experimenting with open-source tools and their favourite AI clients at home because they can't get them past security. Teams use "unofficial" AI applications to get work done faster, then copy the results into approved systems.
If you listen to a lot of enterprises, the story is always the same:
"We aren't seeing the ROI." "Our company is nervous about security." "We can't figure out what business problem to solve first."
But as one of the panellists at our Boston summit said, if the customer is always the problem, you're probably the problem.
There's a deeper issue at play here. It's not that enterprises aren't interested in agentic AI. It's that they're being asked to adopt it on infrastructure and governance models that were never designed for autonomous systems in the first place.
That context is exactly why Barndoor AI exists. Founded by Oren Michels, Barndoor focuses on enabling organisations to adopt agentic systems without losing control of security, compliance, or operational integrity.
Rather than asking companies to bolt AI onto legacy environments, Barndoor rebuilds the foundations, providing the trust layer enterprises need to use autonomous agents responsibly and at scale.
In our panel with Barndoor at the Generative AI Summit in Boston, featuring Oren Michels, Co-Founder & CEO of Barndoor.ai, and Quentin Hardy, Principal at LGTM LLC, that tension came through time and again. What emerged was not another "AI is the future" story, but something more grounded:
- AI is already here, often in unsanctioned ways.
- The real bottleneck isn't model capability. It's control, visibility, and trust.
- And if we don't solve that, we'll end up with a lot of activity, but not much durable value.
This article is about that gap and why Barndoor has chosen to close it.
The real adoption problem: AI without a business centre of gravity
The industry right now is extremely "buzzword compliant."
LLMs, frontier models, RAG, MCP servers, agentic systems – we have an alphabet soup of technologies and protocols. But very few of those terms tell a CFO, a Head of Ops, or a CIO what problem is actually being solved, what the new workflow looks like, or what it will cost to run in production.
That disconnect showed up repeatedly in Boston.
On one side, you have hyperscalers and large software vendors selling increasingly large bundles of "AI platform + orchestration + consulting." They are racing to own the orchestration layer because, as one panellist dryly noted, "when you control the orchestration, you control the revenue."
On the other side, you have teams inside enterprises that don't need another big platform.
They need:
- A secure way to let people experiment at the grassroots level.
- A way to stop AI from doing damaging things.
- A way to see which experiments are actually working, so they can spread those patterns.
What they have instead is a choice between:
- Locking AI down so hard that it becomes useless, or
- Turning a blind eye while "shadow AI" grows outside governance.
Neither of those ends well.

Shadow AI is the new BYOD
If this feels familiar, it's because we've seen this movie before.
The panel drew a direct line back to the early mobile and cloud eras. BlackBerry and early smartphones weren't adopted because IT blessed them in a strategy document. They were adopted because sales teams bought them with their own budgets, used them to close deals faster, and forced the organisation to catch up.
AWS snuck in through side doors when servers got cheap enough for engineers to expense them. One large company famously miscounted its server fleet by 100,000 machines because so much of the infrastructure had been acquired locally, not centrally.
The same pattern is happening with AI and MCP:
- It's trivial for a developer or power user to download an open-source MCP server and connect it to Claude or Cursor.
- Many of these MCP servers are spun up just to show a team is "AI compliant" – then sit unmaintained, with no visibility, growing metaphorical cobwebs.
- Even in highly regulated environments, people test AI workflows from home first because they can't get access to the right infrastructure inside the firewall.
Officially, the organisation is moving cautiously. Unofficially, AI is already threaded through workflows in ways that security and IT can't see.
This is not a momentary phase. It's the default behaviour of ambitious people who want to get more done. If your best people aren't doing this, you probably have a different problem.
The question is not "How do we stop this?" The question is: How do we turn this into something secure, visible, and scalable?
Why the AI chat interface leaves most of the company behind
There's another structural problem that came up on the panel.
Most of the high-profile success stories in AI today are developer-centric. Coding copilots make complete sense as a chat interface because software engineers already work by talking to complex systems: compilers, debuggers, and senior colleagues.
That pattern doesn't make sense for teams outside of engineering.
Sales, operations, finance, compliance – these jobs are not built around "asking a PhD-level intelligence for suggestions." They are built around processes, systems of record, and long-lived workflows.
AI that only exists in a chat window will stay stuck in "suggestion mode" for these teams:
- It can surface documentation.
- It can summarise information.
- It can generate content.
But it still expects a human to take the final action in Salesforce, update the ticket, change the entitlement, or close the loop.
That's useful, but it's not agentic.
On stage, Oren Michels offered a sharper definition: agentic AI is not about suggesting what a human should do next. It is about the AI actually doing the thing – with the right guardrails, on the right systems, under the right identity, and with visibility.
That's where both the opportunity and the risk explode.

Agentic AI as "enthusiastic interns"
One of the more memorable metaphors from the session was this:
Think of your AI agents as very enthusiastic interns.
They are eager. They are fast. In many cases, they're surprisingly capable. But they lack context. They don't understand your culture, your history with a customer, or the subtleties of your regulatory environment. If you give them access to everything on day one, you are setting them – and yourself – up to fail.
With human interns, we intuitively understand this. You bring someone in. You:
- Give them limited access to systems.
- Ask them to complete specific tasks.
- Watch how they go about it.
- Increase their scope as they demonstrate judgment and reliability.
If they handle sensitive information poorly or break the process, you pull them back, coach them, and reassess.
Agentic AI needs the same pattern – but encoded into the infrastructure, not left to informal norms.
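To make that concrete, here is a minimal sketch of what "encoded into the infrastructure" could look like. Everything in it – the trust levels, the tool names, the promotion threshold – is a hypothetical illustration, not Barndoor's actual product or API:

```python
# A minimal sketch of the "intern" pattern encoded as policy, not convention.
# All names here (TrustLevel, AgentProfile, the tools) are hypothetical.
from dataclasses import dataclass
from enum import IntEnum


class TrustLevel(IntEnum):
    OBSERVED = 0    # day one: read-only access
    SUPERVISED = 1  # can draft actions, human approves each one
    TRUSTED = 2     # can act autonomously within scope

# Tools an agent may call at each level; scope grows with proven reliability.
ALLOWED_TOOLS = {
    TrustLevel.OBSERVED:   {"read_ticket", "search_docs"},
    TrustLevel.SUPERVISED: {"read_ticket", "search_docs", "draft_reply"},
    TrustLevel.TRUSTED:    {"read_ticket", "search_docs", "draft_reply", "update_ticket"},
}


@dataclass
class AgentProfile:
    name: str
    level: TrustLevel = TrustLevel.OBSERVED
    successful_calls: int = 0

    def can_call(self, tool: str) -> bool:
        return tool in ALLOWED_TOOLS[self.level]

    def record_success(self, promotion_threshold: int = 50) -> None:
        # Widen scope only after the agent demonstrates reliability,
        # exactly as you would with a human intern.
        self.successful_calls += 1
        if self.successful_calls >= promotion_threshold and self.level < TrustLevel.TRUSTED:
            self.level = TrustLevel(self.level + 1)
            self.successful_calls = 0


agent = AgentProfile("support-triage-agent")
assert agent.can_call("search_docs")
assert not agent.can_call("update_ticket")  # not yet earned
```

The design point is that scope widens only when the policy layer has recorded enough evidence of reliability – the same graduated trust we extend to human interns, enforced in code rather than by convention.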
This is the space Barndoor wants to build solutions in: governing what AI agents can see and do across MCP, systems of record, and enterprise workflows, with the same seriousness we apply to human identity and access management.
Why governance isn't a brake – it's how you get to action and scale
Governance has a reputation problem.
Inside many organisations, it's seen as the team that says "no" – the group that appears late in the game with redlines, mandatory reviews, and a long list of controls that assume the worst.
The panel took a different view: governance is how you enable growth without losing control.
Before we get into the risks, it's important to be clear about what's actually happening inside MCP. Every MCP server exposes a set of "tools": the specific actions an AI agent can take, such as updating records, fetching data, modifying permissions, or initiating workflows.
These tool calls are what make agentic AI powerful, but they're also what make it risky: without the right guardrails, an AI agent can take actions a human would never be allowed to.
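For readers who haven't seen one, here is a minimal sketch of an MCP server exposing a single tool, using the MCP Python SDK's FastMCP helper. The server name and the tool itself are invented for illustration:

```python
# A minimal MCP server exposing one "tool" – the unit of action an agent can take.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-example")  # hypothetical server name


@mcp.tool()
def update_customer_record(customer_id: str, field: str, value: str) -> str:
    """Update a single field on a customer record (illustrative only)."""
    # A real server would write to the system of record here.
    return f"Updated {field} on customer {customer_id} to {value!r}"


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Once an AI client connects, that function appears as an action it can invoke directly – which is exactly why governing the tool layer, not just the chat layer, matters.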
Without proper AI controls over MCP and the tools agents can call, you quickly end up with exactly the risks enterprises fear most:
- Shadow AI: unsanctioned apps bypassing security.
- Data leaks: sensitive information being sent to places it shouldn't be.
- Unrestricted access: over-permissioned agents modifying or deleting critical business data.
You can try to lock all of this down with policy PDFs, firewalls, and "don't use" announcements. Or you can accept that people will keep experimenting, and give them a structure where:
- Only approved MCP servers and tools are available.
- Every AI app and agent connects through a secured gateway.
- Access is defined at the level of user, role, system, and action – not just "on" or "off."
- Every call is logged and visible, so you can see both the success stories and the actions your policies blocked (see the sketch after this list).
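Here is a rough sketch of that gateway structure, reduced to its essentials. The class names, the allowlist, and the policy shape are all assumptions made for illustration – a real control plane would sit in front of live MCP traffic:

```python
# A sketch of the gateway described above: every tool call passes through one
# choke point that checks an allowlist, evaluates policy at the
# user/role/system/action level, and logs the outcome either way.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("mcp-gateway")

APPROVED_SERVERS = {"crm-example", "docs-search"}  # only vetted MCP servers


@dataclass(frozen=True)
class ToolCall:
    user: str
    role: str
    server: str   # which MCP server (the "system")
    action: str   # which tool is being invoked


class Gateway:
    def __init__(self, policy: dict[str, set[str]]):
        # policy maps a role to the set of actions that role may take
        self.policy = policy

    def authorize(self, call: ToolCall) -> bool:
        allowed = (
            call.server in APPROVED_SERVERS
            and call.action in self.policy.get(call.role, set())
        )
        # Blocked calls are logged too: denials are signal, not noise.
        log.info("%s %s/%s as %s -> %s", call.user, call.server, call.action,
                 call.role, "ALLOWED" if allowed else "BLOCKED")
        return allowed


gw = Gateway(policy={"support": {"read_ticket", "draft_reply"}})
gw.authorize(ToolCall("ana", "support", "crm-example", "read_ticket"))    # allowed
gw.authorize(ToolCall("ana", "support", "crm-example", "delete_record"))  # blocked
```

The key design choice is the single choke point: because every call flows through one place, visibility and policy come for free rather than being retrofitted per tool.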
That's effectively what Barndoor is positioning itself as: the control plane for the agentic enterprise, a place where you can centralise visibility and policy for AI agents across your workforce and enterprise data.
It's not about slowing people down. It's about making sure their creativity doesn't outpace the organisation's ability to manage risk.
Why governance is a growth enabler for AI
Strong guardrails don't kill experimentation. They make it safe to move faster. Barndoor calls this "the control plane for the agentic enterprise": governance that unlocks, rather than blocks, AI adoption.

From hidden wins to repeatable success
One of the most useful points in the panel was subtle but important: governance isn't just about catching bad things. It's about discovering good things.
If you don't have any visibility into MCP traffic, you don't just miss security issues. You also miss:
- The engineer who quietly automated a tedious reconciliation workflow.
- The support team that wired an agent to resolve certain ticket types end-to-end.
- The operations manager who built an AI-driven scheduling workflow that saved hours every week.
In a world without a control plane, these wins stay local. They live in private repos, personal workflows, and small teams. They never become organisational patterns.
With a proper governance and observability layer, you can:
- See which AI workflows are emerging (sketched below).
- Quantify their impact.
- Turn them into reusable patterns for other teams.
- Learn from failures just as deliberately as you learn from successes.
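As a small illustration of the discovery side, here is a sketch that aggregates hypothetical gateway logs to surface emerging workflows and repeated policy blocks. The record shape is invented; a real control plane would define its own:

```python
# Turning gateway logs into discovery: aggregate tool-call records to find
# which workflows are emerging and where policies keep blocking real demand.
from collections import Counter

call_log = [
    {"team": "finance", "server": "erp", "action": "reconcile", "allowed": True},
    {"team": "finance", "server": "erp", "action": "reconcile", "allowed": True},
    {"team": "support", "server": "crm", "action": "close_ticket", "allowed": True},
    {"team": "support", "server": "crm", "action": "delete_record", "allowed": False},
]

# Frequent allowed calls are candidate patterns to spread to other teams.
usage = Counter(
    (rec["team"], rec["server"], rec["action"]) for rec in call_log if rec["allowed"]
)
# Repeated blocks show where policy and demand disagree.
blocked = Counter(
    (rec["team"], rec["action"]) for rec in call_log if not rec["allowed"]
)

print(usage.most_common(3))
print(blocked.most_common(3))
```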
This is where Barndoor's focus on "visibility, accountability, and governance" becomes non-negotiable. It's not trying to orchestrate every agent. It's about governing what's already happening, so enterprises can move from isolated experiments to getting real value out of agentic AI.
What this means if you want to be the "AI hero" in your company
The panel ended with a simple challenge to the audience: if you want to be a hero inside your organisation, you have to play both sides.
You have to acknowledge that your colleagues are already using AI, sometimes in ways that make your security team nervous. And you have to help design a path where:
- Experimentation is encouraged, not punished.
- Failure is treated as learning, not as a reason to shut things down.
- Governance is baked into the plumbing, not bolted on at the end.
- AI agents are treated like interns: limited at first, then gradually trusted as they prove themselves.
That's not a role that belongs only to vendors or only to internal teams. It's a partnership.
Barndoor's bet is that enterprises will need a dedicated control plane for this – something built for AI agents, MCP connections, and complex policies, deeply enough to be more than "identity, but with a new coat of paint."
Whether or not you adopt Barndoor specifically, the underlying idea is hard to ignore:
If we want AI agents to stop living in the shadows and start doing real work at scale, we need to give them the same kind of structured, observable environment we give human workers, but built for AI. Granular permissions. Training wheels. Feedback loops. Visibility.
The companies that get this right will be the ones that treat governance not as a gate, but as the infrastructure that makes agentic AI genuinely secure, accountable, and transformative.