Upwind Introduces Inside-Out AI Security Within its Unified CNAPP Platform
New AI-native capabilities safe the rising and dynamic assault floor created as enterprises construct complicated AI providers, fashions, brokers, and information flows throughout trendy cloud environments
Upwind, the subsequent technology runtime-first cloud safety chief, at this time introduced the launch of its built-in AI safety suite, increasing the corporate’s CNAPP to guard the quickly rising enterprise AI assault floor. The suite introduces AI real-time safety, AI posture administration, AI brokers, and runtime safety, permitting AI safety to learn from the identical deep cloud context already powering Upwind’s CNAPP, throughout information safety, API safety, identification, and cloud detection and response.
“AI safety shouldn’t be a stand-alone safety part,” stated Amiram Shachar, Founder and CEO of Upwind. “It ought to be half of a bigger ecosystem. It simply makes excellent sense to go down this route and make it possible for AI safety advantages from all the information and context that our CNAPP already holds.”
AI’s Rapid Adoption and the Missing Security Context
AI innovation has accelerated throughout enterprises, however core safety challenges stay unresolved. Models, brokers, inference endpoints, and AI information flows now span a number of providers, frameworks, and infrastructures, but safety groups lack a cohesive strategy to hint AI conduct, validate AI posture, or perceive the true impression of AI-driven choices. This new dynamic assault floor holds dangers that conventional safety approaches can not handle with out shared context and runtime proof.
Securing this technology of cloud and AI workloads which might be ephemeral by nature, requires a special mind-set. It requires an method centered on real-time alerts, APIs, information in movement, and Layer 7 visibility. This is Upwind’s inside-out method to cloud safety.
Inside-out means observing actual visitors, API calls, information flows, and conduct contained in the workload because it runs as a substitute of counting on static configs and snapshots. Inside-out safety relies on actuality, not assumptions.
Upwind’s runtime-first mannequin grounds AI danger in actual exercise and actual alerts, giving safety groups an correct, prioritized image of what’s really taking place in the meanwhile that issues most: runtime.
Upwind’s new AI capabilities give organizations visibility into the place AI is operating, how fashions and brokers behave at runtime, and what delicate information they work together with, addressing one of the vital urgent visibility gaps dealing with safety groups at this time. By extending its runtime-first structure immediately into the AI layer, Upwind brings AI posture, stock, runtime conduct tracing, and vulnerability testing right into a single, unified platform.
“AI is now driving crucial choices throughout trendy methods, but most organizations nonetheless can’t see what their fashions and brokers are literally doing,” stated Amiram Shachar, Founder and CEO of Upwind. “Upwind adjustments that. Real safety begins with actual proof. We introduced runtime readability to cloud workloads, and now we’re doing the identical for AI. This offers groups factual, end-to-end visibility into how their AI behaves in the true world, and that readability is what’s going to outline the subsequent technology of safe AI.”
A Runtime-First Approach to AI Security
Upwind introduces a tightly built-in set of AI safety capabilities that strengthen how organizations handle and monitor AI throughout each layer of the stack:
- AI Security Posture Management (AI-SPM): Secures uncovered inference endpoints, enforces mannequin versioning and governance, tightens overly broad IAM roles, and detects leaked or uncovered AI API keys throughout cloud and picture sources. By correlating posture points with actual runtime exercise, it surfaces the AI configuration dangers that matter most.
- AI Detection & Response (AI-DR): Monitor brokers, MCP and LLM infrastructure for anomalous conduct and jail-break makes an attempt by way of layer 3, 4 and seven evaluation of course of, community exercise, and prompts payloads. This means safety groups can detect malicious AI conduct in actual time and reply based mostly on dwell, evidence-driven alerts.
- AI Bill of Materials (AI-BOM): Maps fashions, frameworks, SDKs, agent methods, and cloud AI merchandise throughout supply code, cloud inventories, and runtime proof to type a complete, real-time stock of AI elements. This offers groups a unified understanding of what AI is operating, the place it lives, and what it is determined by.
- AI Network Visibility: Extends Upwind’s community engine to decode AI-native visitors, together with JSON-RPC, HTTP/2 streaming, and websockets, whereas figuring out outbound calls to OpenAI, AWS Bedrock, Azure OpenAI, and Vertex AI. It detects shadow or unauthorized AI utilization and highlights delicate information shifting by way of prompts and inference payloads. This supplies real-time readability into how AI methods talk and what information leaves the atmosphere.
- MCP Security: Traces the total sequence of AI agent actions, from the preliminary immediate to downstream operate calls, file operations, API interactions, and ensuing system adjustments. Organizations achieve authoritative, runtime-grounded proof of what an agent did, why it acted, and what impression it had.
- AI Security Testing: Extends Upwind’s Attack Surface Management engine to validate AI methods in opposition to adversarial methods such because the OWASP Top 10 for LLMs, immediate injection, jailbreaks, unsafe device bindings, and hallucination-driven information publicity. This ensures AI functions are constantly examined in opposition to real-world assault patterns as they evolve.
Together, these capabilities give enterprises a single, built-in method to managing cloud and AI danger, decreasing operational complexity and enabling safe AI innovation at scale. Explore Upwind’s full platform capabilities at www.upwind.io.
The publish Upwind Introduces Inside-Out AI Security Within its Unified CNAPP Platform first appeared on AI-Tech Park.
