In-Depth Research

2024 State of AI Security Report

Unveiling the numbers and insights behind the prevalence of AI risks in the cloud

Billions of cloud assets scanned, dozens of insights into AI security revealed

The 2024 State of AI Security Report uncovers AI security risks in actual production environments spanning AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud.

“Orca’s 2024 State of AI Security Report provides valuable insights into how prevalent the OWASP Machine Learning Security Top 10 risks are in actual production environments. By understanding more about the occurrence of these risks, developers and practitioners can better defend their AI models against bad actors. Anyone who cares about AI or ML security will find tremendous value in this study.”

Shain Singh, Project Co-Lead of the OWASP ML Security Top 10

Finding #1: AI development is accelerating

AI adoption rates are high, with most organizations using the technology to develop custom solutions, an effort that requires substantial investment and strategic commitment.

“In many ways, AI is now at a stage reminiscent of where cloud computing was over a decade ago. Fortunately, we’re now more prepared to secure emerging AI technologies and models. Awareness and education play a key role in achieving this goal, which is why we are releasing this inaugural report.”

Gil Geron, CEO and Co-Founder of Orca Security

Get the Full Report

Leveraging data captured between January and August 2024, the 2024 State of AI Security Report analyzes the security of deployed AI models and reveals the top AI risks in cloud services.

Gain important insights into the current and future state of AI security, including:

  • Adoption of AI services, packages, and models
  • Vulnerabilities in AI applications
  • Exposed AI models
  • Insecure access
  • Misconfigurations
  • Encryption
  • Challenges in AI security
  • Key recommendations