aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,020 · Last 24 hours: 7 · Last 7 days: 186
Daily Briefing · Saturday, April 11, 2026

Anthropic's Claude Code Dominates Enterprise AI Market: At the HumanX AI conference, Anthropic's coding agent (a tool that autonomously generates, edits, and reviews code) has eclipsed OpenAI as the focal point among executives and investors, generating over $2.5 billion in annualized revenue since its May 2025 public launch. The company's concentrated strategy on code generation, rather than diversifying across multiple AI capabilities, is securing significant enterprise adoption despite ongoing legal disputes with the Department of Defense.


Spotify Combats AI-Generated Impersonation at Scale: AI bots are uploading fabricated music to Spotify under the identities of legitimate artists, including high-profile musicians like Jason Moran and Drake, prompting the platform to remove over 75 million fraudulent tracks in the past year. Spotify is developing a pre-publication verification tool that will require artist approval before releases appear under their names, addressing a growing identity spoofing problem in content platforms.

Latest Intel

01

‘It has your name on it, but I don’t think it’s you’: how AI is impersonating musicians on Spotify

security · safety
Apr 11, 2026

AI bots are creating fake music and uploading it to Spotify under the names of real musicians, including famous artists like jazz pianist Jason Moran and rapper Drake. Spotify has acknowledged the problem, removing over 75 million spammy tracks in 12 months, and says it is developing a new tool that will let artists review and approve releases before they go live on the platform.

Fix: Spotify stated it is 'working on a new tool to give artists more control over what shows up under their name' that would 'let artists review and then approve or decline releases before they go live on the platform.' The company also said that 'estate or rights holders for a deceased artist can opt into the company's new tool if they have an account.' Additionally, Spotify noted it 'employs a range of safeguards to protect artists, including systems designed to detect and prevent unauthorized content, human review, and reporting and takedown processes.'

The Guardian Technology
02

Vibe check from inside one of AI industry's main events: 'Claude mania'

industry
Apr 11, 2026

At the HumanX AI conference in San Francisco, Anthropic's Claude Code (an AI coding agent that generates, edits, and reviews code) has become the dominant topic in the AI industry, surpassing OpenAI's influence among executives and investors. Despite a legal dispute with the Department of Defense, Anthropic continues to gain momentum, with Claude Code generating over $2.5 billion in annualized revenue since its May 2025 public launch. The company's focus on coding rather than spreading resources across multiple AI products has positioned it well to capture enterprise contracts.

CNBC Technology
03

ChatGPT rolls out new $100 Pro subscription to challenge Claude

industry
Apr 10, 2026

OpenAI has launched a new $100 Pro subscription tier to compete with Claude's pricing and target coders and enterprises. The new Pro plan sits between the existing $20 Plus and $200 Pro Max tiers, offering 5x higher usage limits than Plus and access to advanced features like Codex (a code-generation tool), deep research, and GPT-5. OpenAI's strategy mirrors Anthropic's approach of offering a mid-tier subscription designed specifically for people doing complex, high-stakes work.

BleepingComputer
04

Man arrested after Sam Altman's house hit with Molotov cocktail, OpenAI headquarters threatened

security
Apr 10, 2026

A 20-year-old man was arrested after throwing a Molotov cocktail (a homemade incendiary weapon) at OpenAI CEO Sam Altman's home and then threatening arson at the company's San Francisco headquarters. No one was injured in the attack, and the suspect was taken into custody with charges pending. The incident occurred during a controversial period for OpenAI involving military partnerships and litigation.

CNBC Technology
05

Vance, Bessent questioned tech giants on AI security before Anthropic's Mythos release

policy · security
Apr 10, 2026

U.S. government officials, including Vice President JD Vance and Treasury Secretary Scott Bessent, met with tech CEOs from companies like Anthropic, OpenAI, Google, and Microsoft to discuss the security of large language models (AI systems trained on large amounts of text data) and how to protect against cyber attacks before Anthropic released its new Mythos model. Anthropic briefed government officials on the model's capabilities, including potential offensive and defensive cybersecurity applications, and emphasized that bringing the government into the conversation early about risks and safety measures was a priority.

CNBC Technology
06

CVE-2026-40252: FastGPT is an AI Agent building platform. Prior to 4.14.10.4, Broken Access Control vulnerability (IDOR/BOLA) allows any

security
Apr 10, 2026

FastGPT (a platform for building AI agents) has a broken access control vulnerability (IDOR/BOLA, a flaw where one user can access another user's data by guessing or changing IDs) that allows any authenticated team to run AI applications belonging to other teams by using a different application ID. The system checks that users are logged in but doesn't verify that the application they're trying to use actually belongs to their team, leading to unauthorized access to private AI workflows across teams.

Fix: This vulnerability is fixed in version 4.14.10.4. Users should upgrade to FastGPT 4.14.10.4 or later.

NVD/CVE Database
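To make the IDOR/BOLA pattern concrete, here is a minimal sketch in JavaScript; the data, session shape, and function names are invented for illustration and are not FastGPT's actual code.

```javascript
// Hypothetical sketch of the flaw (names and data are invented, not
// FastGPT's actual code): the caller is authenticated, but the requested
// application's ownership is never checked.
const apps = new Map([
  ["app-1", { teamId: "team-A", name: "Support bot" }],
  ["app-2", { teamId: "team-B", name: "Internal HR agent" }],
]);

// Vulnerable: any logged-in team can run any appId (IDOR/BOLA).
function runAppVulnerable(session, appId) {
  if (!session.authenticated) throw new Error("401: not logged in");
  return apps.get(appId); // ownership never verified
}

// Fixed: also verify the app belongs to the caller's team.
function runAppFixed(session, appId) {
  if (!session.authenticated) throw new Error("401: not logged in");
  const app = apps.get(appId);
  if (!app || app.teamId !== session.teamId) {
    throw new Error("403: app does not belong to your team");
  }
  return app;
}

const teamA = { authenticated: true, teamId: "team-A" };
console.log(runAppVulnerable(teamA, "app-2").name); // leaks "Internal HR agent"
```

The fix is the single extra ownership check: authentication answers "who are you?", while authorization must still answer "is this object yours?".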
07

GHSA-75hx-xj24-mqrw: n8n-mcp has unauthenticated session termination and information disclosure in HTTP transport

security
Apr 10, 2026

n8n-mcp (a tool for connecting AI systems to external services) had security problems where certain HTTP endpoints (the connection points a program offers over the internet) didn't require authentication and exposed sensitive system information. An attacker with network access could shut down active sessions and gather details to plan further attacks.

Fix: Fixed in v2.47.6, where all MCP session endpoints now require Bearer authentication (a token-based security method). If you cannot upgrade immediately, you can restrict network access using firewall rules, reverse proxy IP allowlists, or a VPN to allow only trusted clients. Alternatively, use stdio mode (MCP_MODE=stdio) instead of HTTP mode, since stdio transport does not expose HTTP endpoints and is not affected by this vulnerability.

GitHub Advisory Database
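As a rough illustration of the Bearer-token check the fix describes (all names here are hypothetical, not n8n-mcp's real middleware; a production check should also use a constant-time comparison):

```javascript
// Hypothetical sketch of a Bearer-token check on an HTTP endpoint.
// Illustrative only; a hardened version would compare tokens in
// constant time to avoid timing side channels.
function isAuthorized(headers, expectedToken) {
  const value = headers["authorization"] || "";
  const [scheme, token] = value.split(" ");
  return scheme === "Bearer" && token === expectedToken;
}

console.log(isAuthorized({}, "s3cret"));                                 // false: no header
console.log(isAuthorized({ authorization: "Bearer s3cret" }, "s3cret")); // true
```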
08

GHSA-fw9q-39r9-c252: LangSmith Client SDKs has Prototype Pollution in langsmith-sdk via Incomplete `__proto__` Guard in Internal lodash `set()`

security
Apr 10, 2026

The LangSmith JavaScript SDK contains a prototype pollution vulnerability (a type of attack where an attacker modifies the base object that all JavaScript objects inherit from) in its internal lodash `set()` function. The vulnerability exists because the code only blocks the `__proto__` key but allows attackers to bypass this protection using `constructor.prototype` instead, potentially affecting all objects in a Node.js application if they control data being processed by the `createAnonymizer()` API.

Fix: Fixed in version 0.5.18. Users should update their `langsmith` package to 0.5.18 or later.

GitHub Advisory Database
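The incomplete guard is easy to reproduce in plain JavaScript. The sketch below is not the langsmith code, just a minimal deep-set helper with the same flaw: it rejects the literal `__proto__` key but lets `constructor.prototype` through.

```javascript
// Minimal deep-set with the incomplete guard described above.
// Illustrative code only, not the langsmith SDK's internals.
function naiveSet(obj, path, value) {
  const keys = path.split(".");
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    const k = keys[i];
    if (k === "__proto__") return obj; // guard blocks only this literal key
    if (cur[k] == null || (typeof cur[k] !== "object" && typeof cur[k] !== "function")) {
      cur[k] = {};
    }
    cur = cur[k];
  }
  cur[keys[keys.length - 1]] = value;
  return obj;
}

// The guarded path is rejected...
naiveSet({}, "__proto__.hacked", true);
console.log({}.hacked); // undefined

// ...but constructor.prototype reaches Object.prototype anyway, so every
// object in the process now inherits the attacker-controlled key.
naiveSet({}, "constructor.prototype.hacked", true);
console.log({}.hacked); // true
```

A complete guard must reject `constructor` and `prototype` as path segments (or use a `Map` for nesting), not just the `__proto__` string.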
09

GHSA-8x8f-54wf-vv92: PraisonAI Browser Server allows unauthenticated WebSocket clients to hijack connected extension sessions

security
Apr 10, 2026

PraisonAI's browser bridge server (started with `praisonai browser start`) has a security flaw where it accepts WebSocket connections (a two-way communication channel between a client and server) without proper authentication checks. An attacker on the network can connect without credentials, trick the server into linking their connection to a legitimate browser extension session, and then intercept all commands and responses from that session, effectively taking control of the browser automation without permission.

GitHub Advisory Database
10

GHSA-ffp3-3562-8cv3: PraisonAI: Coarse-Grained Tool Approval Cache Bypasses Per-Invocation Consent for Shell Commands

securitysafety
Apr 10, 2026

PraisonAI Agents has a security flaw where tool approval decisions are cached by tool name only, not by the specific command arguments. Once a user approves the `execute_command` tool (a function that runs shell commands) for any command like `ls -la`, all future shell commands in that session bypass the approval prompt entirely. Combined with the fact that all environment variables (including API keys and credentials) are passed to subprocesses, an LLM agent can silently steal sensitive data without asking permission again.

GitHub Advisory Database
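A small sketch of why caching approvals by tool name alone is dangerous (illustrative only, not PraisonAI's implementation): the cache key must include the arguments, not just the tool.

```javascript
// Illustrative only (not PraisonAI's code): contrast an approval cache
// keyed by tool name with one keyed by name plus arguments.
const approvalsByName = new Set(); // flawed: one approval covers everything
const approvalsByCall = new Set(); // safer: one approval per exact invocation

function needsApprovalVulnerable(tool, _args) {
  if (approvalsByName.has(tool)) return false; // any earlier approval suffices
  approvalsByName.add(tool);
  return true;
}

function needsApprovalSafe(tool, args) {
  const key = `${tool}:${JSON.stringify(args)}`;
  if (approvalsByCall.has(key)) return false; // only this exact command is cached
  approvalsByCall.add(key);
  return true;
}

// The user approves one harmless listing...
needsApprovalVulnerable("execute_command", ["ls", "-la"]); // true: prompt shown
// ...and every later shell command then skips the prompt entirely.
console.log(needsApprovalVulnerable("execute_command", ["cat", "~/.aws/credentials"])); // false
console.log(needsApprovalSafe("execute_command", ["cat", "~/.aws/credentials"]));       // true
```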
Critical This Week

critical · CVE-2026-40111: PraisonAIAgents is a multi-agent teams system. Prior to 1.5.128, the memory hooks executor in praisonaiagents passes a us… (NVD/CVE Database, Apr 9, 2026)

critical · GHSA-2763-cj5r-c79m: PraisonAI Vulnerable to OS Command Injection (GitHub Advisory Database, Apr 8, 2026)

critical · GHSA-qf73-2hrx-xprp: PraisonAI has sandbox escape via exception frame traversal in `execute_code` (subprocess mode) (CVE-2026-39888, GitHub Advisory Database, Apr 8, 2026)

critical · Hackers exploit a critical Flowise flaw affecting thousands of AI workflows (CSO Online, Apr 8, 2026)