Codacy has launched a free AI Coding Risk Assessment to help engineering organizations measure the security and compliance posture of their AI-assisted development workflows and benchmark it against the industry.
LISBON, Portugal, Nov. 3, 2025 /PRNewswire-PRWeb/ -- Codacy, a leading platform for end-to-end AppSec and Code Quality automation, today launched the AI Coding Risk Assessment, a self-assessment survey that helps engineering teams measure and benchmark the security posture of their AI-assisted development workflows (using tools like GitHub Copilot, Cursor, or Claude).
The initiative comes as organizations struggle to reconcile the speed of generative AI with the complex risks associated with untrusted, machine-generated code and increasing regulatory scrutiny globally. The survey, composed of 24 targeted questions, aims to provide the first comprehensive, anonymous data set on how teams are mitigating risk, covering:
- Policy and Governance;
- Security and Risk Management;
- Culture and Training.
Unlike generalized "state of" reports, the resulting data is immediately personalized. By contributing, every respondent receives a tailored industry benchmark showing exactly how their company's practices compare to peers, alongside an AI Governance and Security checklist to address gaps.
"After speaking with leading AI industry figures, including the teams behind Microsoft's Copilot, Lovable and Windsurf, we observed a need for a unified, data-backed resource," said Jaime Jorge, CEO and Co-founder of Codacy. "That's why we created this benchmark. It helps companies identify where they stand, compare themselves to the market, and take concrete, actionable steps to leverage AI at scale."
To participate in the survey and access the AI Governance Checklist and benchmark data, visit: https://ai-risk.codacy.com/
About Codacy:
Codacy is a leading platform for end-to-end AppSec and Code Quality automation, supporting 15,000 organizations and 200,000 developers worldwide. Codacy's proprietary IDE plugin, Guardrails, automatically repairs security and quality violations in AI-generated code before it is even viewed by the user, allowing organizations to enforce compliance from the moment of code inception.
Media Contact
Mark Raihlin, Codacy, +351 965 914 953, [email protected], codacy.com
SOURCE Codacy