Introducing Harness AI Security: Discovering, Testing, and Protecting AI Applications


If your CISO asked for an inventory of every AI component in your production applications today, could you provide it?
For most organizations, the answer is no. Development teams are racing ahead to build AI functionality into business and customer-facing applications. This transforms application architectures and introduces Large Language Models (LLMs), Model Context Protocol (MCP) servers, autonomous agents, and more. However, the implications for security are often an afterthought.
The handoff between development and security teams has never been perfect. Cloud-native application architectures introduced the problem of shadow APIs. As a security team, you knew the APIs were out there, but nobody told you where. That’s a risk that security teams have been chasing for the last decade, and it’s only gotten worse with time.
The parallels between API and AI adoption are uncanny. The problem of shadow AI is just starting to creep into security teams’ collective peripheral vision. And just as there was a new OWASP Top 10 for API risks, there’s now a new OWASP Top 10 for LLM risks. But this time around, there's an architectural consistency that security teams can count on. While your threat surface is evolving (e.g., with the introduction of prompt injections and more), your attack surface (i.e., the entry points for exploitation) is not; it's simply growing.
Most AI components and services communicate via APIs, especially in distributed or production environments. This means that every new AI client, MCP server, and LLM communicates through API endpoints - something your security teams have had time to understand. While AI threats are different, from an attack surface perspective, AI security is fundamentally an API security problem.
That is why Harness built AI security on our industry-leading API security platform. We understand that protecting AI-native applications requires the same comprehensive approach that's worked for APIs: complete visibility into your attack surface, proactive vulnerability testing, and real-time threat protection. Today, we're introducing three new capabilities in beta - AI discovery, AI testing, and AI protection. These capabilities help you discover every AI component in your application environment, dynamically test those components against evolving AI-specific threats, and protect them against live attacks in production.
AI Discovery: Addressing the Problem of Shadow AI
Just as with API security, AI security starts with discovery. You need to know what AI components you have and where they are before you can assess, understand, and mitigate the risk they present to your organization.
Harness already helps you automatically discover and inventory all of the APIs in your environment. We continually monitor and analyze your API traffic to catch new API endpoints as they’re first deployed, as well as updates to existing endpoints as they occur. And because AI components communicate through APIs, it’s a natural extension to help you recognize those newly deployed AI components as well. For example, MCP communicates through the MCP protocol (or simply Model Context Protocol) based on JSON-RPC 2.0 for messaging.
AI discovery is not just for identifying first-party AI components in your environment, but also calls to third-party Generative AI (GenAI) services like OpenAI, Anthropic, and Google. These services don’t use MCP, but can also be discovered through clients making structured JSON requests to specific REST API endpoints. Both first-party API components and third-party AI tools are added to the AI catalogue, so you have a complete list of everything in your environment at any time, in real time, as soon as the first API call is observed.
Beyond identifying components, AI discovery also has a role in analyzing data flows and assessing runtime risk - for example:
- Lack of encryption or weak authentication on AI APIs
- Unauthenticated APIs calling LLM
- Sensitive data in prompts
- Data policy violations (e.g., regulated data sent to external models)
Increasingly, this inventory isn't optional. Just as PCI DSS 4.0 introduced the requirement to maintain an up-to-date inventory of an organization’s API endpoints, the EU AI Act, ISO 42001, and emerging regulations may soon do the same for AI systems. Preparing for evolving compliance requirements helps AI-forward organizations manage future risk.
AI Testing: Shifting Left to Test for Risk Earlier in the SDLC
Existing DAST tools were built to find traditional web application risks, such as SQL injections and XSS vulnerabilities. They can't test for prompt injection, system prompt leakage, or excessive agency in AI agents. An attacker exploiting prompt injection isn't attacking application code—they're using natural language to manipulate AI behavior, making it ignore instructions, leak information, or take unauthorized actions.
Because of this, AI applications need fundamentally different testing approaches than traditional applications. Effective AI testing will validate both inputs and outputs. An application might resist prompt injection but still generate responses that leak PII, contain toxic language, or violate compliance policies. This dual focus covers the OWASP Top 10 for LLM Applications: prompt injection (LLM01), sensitive information disclosure (LLM02), system prompt leakage (LLM07), excessive agency, and more.
The economics of security say to catch vulnerabilities early. Finding a prompt injection during development costs an afternoon of engineering time. Finding it in production after a data leak costs legal fees, regulatory fines, and brand damage.
That's why AI testing needs to integrate into CI/CD pipelines. Automated testing on every commit gives developers immediate feedback, letting them fix AI-specific vulnerabilities before code ships. And by integrating with continuous AI discovery, you can test every AI component in your environment, including previously unknown shadow AI.
AI Protection: Protecting against Prompt Injections and More
Discovery tells you what you have. Testing tells you where the risks are and helps you shift AI security left and mitigate risks before deploying applications in production. However, production environments will always need an active defense against live attacks.
AI threats are evolving fast, with new exploits being discovered daily. You need protection that adapts to changing threat patterns and understands request context. Because AI interactions flow through APIs, you can apply behavioral analysis techniques that work for API security—establish normal traffic baselines and detect anomalies that might indicate attacks no, even novel techniques you've never seen.
A prompt injection attempt might look like a normal user query on the surface. Only by understanding the deep runtime context—how the request interacts with AI components, what data it accesses, what response it generates—can you distinguish legitimate use from malicious intent.
Real-time protection must address the full spectrum of AI-specific risks: detecting and blocking prompt injection before it reaches your LLMs, identifying when AI applications generate improper responses that leak sensitive data, and preventing misuse of excessive agency where AI agents attempt unauthorized actions.
But AI protection can't exist in a vacuum. Most organizations already have WAFs, SIEM systems, and SOAR platforms. AI protection should enhance these investments—automatically creating WAF rules based on observed threats, feeding threat intelligence into your SIEM, enabling coordinated response across your security stack.
Getting Started with AI Security
Harness’ integrated approach to AI security helps ensure that you’re testing and protecting not just the AI components you know about (which can be very few) but also the ones you don’t. Starting with AI discovery and inventory gives you confidence that you have the full picture of your AI risk across your growing attack surface and evolving threat surface.
If your CISO asked for an inventory of every AI component in your production applications today, you could not only provide it but also an assessment of your overall AI risk and a means to protect your AI components against active threats in production.
All three capabilities—AI discovery, AI testing, and AI protection—are available today in beta and built on the same API security platform you already have. If your developers are building AI-native applications, and you don’t know where they are, contact your account manager today and ask how we can help.