Anthropic's Claude Code Source Code Leaked on GitHub, Thousands of Repos Taken Down
Anthropic accidentally exposed Claude Code's 512,000-line source code; the company took down thousands of GitHub repos in a controversial DMCA sweep.
Popular AI gateway startup LiteLLM has publicly severed ties with compliance vendor Delve following a credential-stealing malware incident and whistleblower allegations that Delve fabricated compliance audit data.
At RSA Conference 2026, CrowdStrike, Cisco, Palo Alto Networks, Microsoft, and Cato CTRL each unveiled AI agent identity frameworks, yet real-world Fortune 50 incidents revealed three unresolved gaps in agentic AI security.
Oasis Security researchers discovered three chained flaws in Anthropic's Claude — including a prompt injection, Files API exfiltration path, and open redirect — enabling silent data theft through a Google Search ad.
Security researchers demonstrated an autonomous AI agent compromising McKinsey's internal AI system in under two hours via prompt injection — a well-known but still widely unmitigated attack vector — raising urgent concerns about enterprise AI security.
OpenAI has announced the acquisition of Promptfoo, an open-source AI security and red-teaming startup, to bolster the safety and reliability of its AI agents against adversarial attacks and prompt injection vulnerabilities.
Anthropic publicly accused Chinese AI laboratories of systematically extracting knowledge from its Claude models through distillation attacks, releasing new detection and prevention research as the US debates AI chip export controls.
Microsoft Copilot bypassed DLP policies and sensitivity labels twice in eight months — including a four-week exposure affecting the UK's NHS — revealing a systemic blind spot in enterprise AI security stacks.
Anthropic has released Claude Code Security, a new AI-powered application security tool that scans codebases for complex vulnerabilities using human-like reasoning, sending cybersecurity stocks tumbling on the news.
Cybersecurity experts warn Moltbook, a social network for AI agents, poses prompt injection risks that could compromise thousands of agents simultaneously.
UF scientists create HMNS method to test AI safety measures, successfully bypassing Meta and Microsoft systems to identify security vulnerabilities.
OpenAI's latest AI model demonstrates alarming capability to drain cryptocurrency wallets, successfully exploiting vulnerable smart contracts in 72% of tests.
Treasury Department releases six resources to strengthen AI security and risk management across financial sector through AIEOG partnership.
Microsoft confirms a critical bug had allowed Copilot AI to summarize confidential emails since January, bypassing data loss prevention policies in Microsoft 365.
Gartner warns that 57% of employees use personal GenAI for work, as autonomous AI agents and post-quantum cryptography threats reshape the cybersecurity landscape.
Chinese state-backed hacking group APT31 leveraged Google's Gemini AI to automate vulnerability analysis and plan cyberattacks against US targets, marking a significant escalation in AI-powered cyber warfare.
University of Regina researchers have enhanced the CIPHER disinformation detection tool with AI capabilities to combat false narratives targeting Canadians. The system analyzes Russian propaganda campaigns and is expanding to decode Chinese-language disinformation.
Google reports commercially motivated actors conducted distillation attacks on Gemini with over 100,000 prompts to extract AI model capabilities and intellectual property.
APT groups from China, North Korea, and Iran use Google Gemini for reconnaissance, malware coding, and phishing campaigns, Google GTIG reveals.
Groundbreaking research exposes the industrial-scale proliferation of deepfake fraud, highlighting urgent cybersecurity threats posed by AI-generated synthetic media.