In-Depth Research
2024 State of AI Security Report
Unveiling the numbers and insights behind the prevalence of AI risks in the cloud
Billions of cloud assets scanned, dozens of insights into AI security revealed
The 2024 State of AI Security Report uncovers AI security risks in actual production environments spanning AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud.
“Orca’s 2024 State of AI Security Report provides valuable insights into how prevalent the OWASP Machine Learning Security Top 10 risks are in actual production environments. By understanding more about the occurrence of these risks, developers and practitioners can better defend their AI models against bad actors. Anyone who cares about AI or ML security will find tremendous value in this study.”
Shain Singh, Project Co-Lead of the OWASP ML Security Top 10
Finding #1: AI development is accelerating
AI adoption rates are high, with most organizations using the technology to develop custom solutions. This requires a substantial investment and strategic commitment to the technology.
56%
of organizations are using AI to develop their own custom applications
Finding #2: Cloud provider default settings are a security concern
Many default settings prioritize the speed of AI development over important security considerations, leading to preventable risks. The following are just three of many examples:
27%
of organizations using Azure OpenAI have not configured their accounts with private endpoints
45%
of Amazon SageMaker buckets are using the default bucket naming convention
98%
of organizations using Google Vertex AI have not enabled encryption at rest with self-managed encryption keys
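Predictable defaults are easy to audit for. As one illustration, Amazon SageMaker's documented default bucket name follows the pattern sagemaker-{region}-{account-id}, which makes buckets straightforward to enumerate once an account ID leaks. A minimal sketch of detecting that convention (the regex is illustrative, not an official AWS check):

```python
import re

# Default SageMaker buckets are named "sagemaker-<region>-<account-id>",
# e.g. "sagemaker-us-east-1-123456789012". The pattern below is an
# illustrative approximation of that documented convention.
DEFAULT_SAGEMAKER_BUCKET = re.compile(r"^sagemaker-[a-z]{2}(-[a-z0-9]+)+-\d{12}$")

def uses_default_naming(bucket_name: str) -> bool:
    """Return True if the bucket follows the predictable default pattern."""
    return bool(DEFAULT_SAGEMAKER_BUCKET.match(bucket_name))
```

Running this over a bucket inventory would flag names like sagemaker-us-east-1-123456789012 while leaving custom names such as acme-ml-artifacts alone.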
Finding #3: Vulnerabilities apply to AI, too
Most organizations have deployed AI packages that contain at least one CVE. While most present low to moderate risk, even a single CVE can support a high-severity attack path.
62%
of organizations have deployed an AI package with at least one CVE
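In practice, finding such packages means cross-referencing deployed package versions against vulnerability advisories. A minimal sketch of that lookup, using a hypothetical advisory entry for illustration (a real audit would query a vulnerability database such as OSV or the NVD):

```python
# Hypothetical advisory data for illustration only; a real scan would
# pull advisories from a vulnerability database such as OSV or the NVD.
ADVISORIES = {
    # package name -> {affected version -> CVE ID}
    "example-ml-lib": {"1.2.0": "CVE-2024-00000"},  # hypothetical entry
}

def flag_vulnerable(installed: dict) -> list:
    """Return (package, version, cve) tuples for installed packages
    whose exact version appears in the advisory data."""
    findings = []
    for pkg, version in installed.items():
        cve = ADVISORIES.get(pkg, {}).get(version)
        if cve:
            findings.append((pkg, version, cve))
    return findings
```

Even one hit from a scan like this deserves attention, since a single CVE can anchor a high-severity attack path.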
“In many ways, AI is now at a stage reminiscent of where cloud computing was over a decade ago. Fortunately, we’re now more prepared to secure emerging AI technologies and models. Awareness and education play a key role in achieving this goal, which is why we are releasing this inaugural report.”
Gil Geron
CEO and Co-Founder of Orca Security
Get the Full Report
Leveraging data captured between January and August 2024, the 2024 State of AI Security Report analyzes the security of deployed AI models and reveals the top AI risks in cloud services.
Gain important insights into the current and future state of AI security, including:
- Adoption of AI services, packages, and models
- Vulnerabilities in AI applications
- Exposed AI models
- Insecure access
- Misconfigurations
- Encryption
- Challenges in AI security
- Key recommendations