
The data indicates that while AI is prolific, it is not always precise. The 15-18% increase in security vulnerabilities per line of code suggests that AI models are occasionally hallucinating insecure patterns or utilizing outdated libraries. Furthermore, the rise in code duplication points to a "copy-paste" culture where developers might accept AI suggestions without refactoring them for modularity.
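The duplication trend the report describes is the kind of thing clone-detection tools quantify. As a rough illustration only (not Opsera's methodology), here is a naive sketch that measures what fraction of fixed-size line windows in a source file occur more than once:

```python
from collections import Counter

def duplication_ratio(source: str, window: int = 4) -> float:
    """Fraction of `window`-line chunks that appear more than once.

    A deliberately naive stand-in for what real clone-detection
    tools measure; the window size of 4 is an arbitrary choice.
    """
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    chunks = [tuple(lines[i:i + window])
              for i in range(len(lines) - window + 1)]
    if not chunks:
        return 0.0
    counts = Counter(chunks)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(chunks)
```

A file with no repeated four-line blocks scores 0.0; pasting the same block twice pushes the score up, which is the "copy-paste culture" signal in miniature.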
Perhaps the most human insight from the report is the "Trust Deficit." Despite the speed at which code is produced, AI-generated pull requests sit in the review queue 4.6 times longer than human-written code.
This lag suggests a psychological barrier: senior developers and peer reviewers are exercising extreme caution, double-checking AI logic more rigorously than they would a colleague's work. This validation bottleneck threatens to erode the velocity gains made during the coding phase. To combat this, Opsera argues that enterprises must invest in better automated testing and governance tools that can pre-validate AI code before it reaches human reviewers.
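What such a pre-validation gate might look like can be sketched in a few lines. This is a hypothetical policy check, not any specific Opsera product; the `PullRequest` fields and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    ai_generated: bool
    tests_passed: bool
    security_findings: int      # open findings from a security scan
    duplication_ratio: float    # fraction of duplicated lines in the diff

def pre_validate(pr: PullRequest,
                 max_findings: int = 0,
                 max_duplication: float = 0.05) -> list[str]:
    """Return blocking issues; an empty list means the PR is
    clean enough to hand to a human reviewer."""
    issues = []
    if not pr.tests_passed:
        issues.append("test suite failing")
    if pr.security_findings > max_findings:
        issues.append(f"{pr.security_findings} unresolved security findings")
    if pr.ai_generated and pr.duplication_ratio > max_duplication:
        issues.append(f"duplication {pr.duplication_ratio:.0%} over threshold")
    return issues
```

The design point is that the machine absorbs the mechanical checks, so the human reviewer's scarce attention goes only to logic and intent, which is where the trust deficit actually lives.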
In the battle for tool supremacy, GitHub Copilot remains the undisputed king, commanding a 60-65% market share. However, the landscape is fragmenting. The report notes a rising influence of "agentic" tools and IDE-native assistants that promise more autonomy than simple code completion.
Adoption is also not uniform across industries. While technology and startup sectors are pushing the 90% saturation point, highly regulated industries like healthcare and insurance lag by 9-12 percentage points. In these sectors, strict compliance requirements and data privacy concerns act as a brake on rapid AI integration.
A surprising finding for CFOs and CIOs is the inefficiency in spending. The report identifies that approximately 21% of AI tool licenses are underutilized. In large enterprises, this translates to millions of dollars in "shelfware"—subscriptions that are paid for but rarely used to their full potential.
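The arithmetic behind the "millions of dollars" claim is easy to reproduce. The 21% figure comes from the report; the seat count and per-seat price below are purely illustrative assumptions, not Opsera's numbers:

```python
def shelfware_cost(seats: int,
                   price_per_seat_year: float,
                   underutilized_rate: float = 0.21) -> float:
    """Annual spend on licenses that are paid for but underused.

    The 21% default comes from the report; seats and pricing
    are whatever your organization actually pays.
    """
    return seats * price_per_seat_year * underutilized_rate

# Hypothetical: 20,000 seats at $390/seat/year
# 20_000 * 390 * 0.21 = 1,638,000.0
wasted = shelfware_cost(20_000, 390)
```

At enterprise scale even modest per-seat pricing multiplies into seven figures of idle spend, which is the report's point.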
This underutilization often stems from a lack of proper onboarding. Developers are given access to powerful tools but lack the specific training on how to integrate them into their daily workflows effectively. Opsera emphasizes that purchasing the tool is only step one; enabling the workforce is where the ROI is realized.
Looking ahead, the report predicts that the definition of an "AI Coding Assistant" will evolve. We are moving away from simple autocomplete functions toward agentic AI: systems capable of reasoning, planning, and executing complex multi-step tasks.
For DevOps teams, this means the future will likely involve managing fleets of AI agents that not only write code but also configure environments, run tests, and remediate security alerts autonomously. As we move deeper into 2026, the competitive advantage will belong to organizations that can govern these agents effectively, balancing the need for speed with the non-negotiable demands of security and quality.
For now, the message is clear: AI is here, it is fast, but it requires a steady hand at the wheel. The focus for the coming year must shift from adoption to optimization.