Attacks & Vulnerabilities

LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks (3 minute read)
Three vulnerabilities were disclosed in LangChain and LangGraph: a path traversal flaw (CVE-2026-34070, CVSS 7.5) that exposed arbitrary files, a deserialization bug (CVE-2025-68664, CVSS 9.3) that leaked API keys and environment secrets, and a SQL injection in LangGraph's SQLite checkpoint (CVE-2025-67644, CVSS 7.3) that exposed conversation histories. Patches are available: langchain-core ≥1.2.22, langchain-core 0.3.81/1.2.5, and langgraph-checkpoint-sqlite 3.0.1.

FBI Confirms Hack of Director Patel's Personal Email Inbox (2 minute read)
In retaliation for the FBI's seizure of its website, the Iranian-affiliated Handala hacking group claimed to have breached FBI Director Kash Patel's email inbox. The FBI confirmed that the attackers breached Patel's personal Gmail inbox and said it has taken steps to reduce the impact of the breach. The group also published a watermarked subset of the stolen data.

File Read Flaw in Smart Slider Plugin Impacts 500K WordPress Sites (2 minute read)
Security researchers discovered an arbitrary file-read vulnerability in the popular Smart Slider WordPress plugin. The plugin's AJAX export function is missing an authorization check, allowing any authenticated user, including subscribers, to export arbitrary files, including sensitive ones such as wp-config.php.

Audio Steganography in Supply Chain Attacks (7 minute read)
In March, TeamPCP compromised PyPI packages, including Trivy, litellm, and the Telnyx SDK, by injecting credential-stealing malware hidden inside WAV audio files. The technique packs base64-encoded, XOR-encrypted payloads into valid WAV frame data, bypassing firewalls, EDR tools, and MIME-type checks because the files register as harmless audio.
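To make the packing scheme concrete, here is a benign round-trip sketch using only Python's standard library. The payload, key, and WAV parameters are hypothetical illustrations of the encoding idea, not the actual TeamPCP tooling:

```python
import base64
import io
import wave

def xor(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, applied symmetrically for encrypt and decrypt."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pack(payload: bytes, key: bytes) -> bytes:
    """Return a structurally valid mono 8-bit WAV whose frame data is the
    base64-encoded, XOR-encrypted payload."""
    frames = base64.b64encode(xor(payload, key))
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(1)
        w.setframerate(8000)
        w.writeframes(frames)
    return buf.getvalue()

def unpack(wav_bytes: bytes, key: bytes) -> bytes:
    """Recover the payload from the WAV frame data."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        frames = w.readframes(w.getnframes())
    return xor(base64.b64decode(frames), key)
```

Because the container is a well-formed RIFF/WAV file, naive MIME-type and magic-byte checks classify it as audio, which is exactly why the technique evades them.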
On Linux and macOS, a detached subprocess downloaded ringtone.wav from a remote server, extracted the payload in memory, then deleted itself. The harvester collected environment variables, SSH keys, shell history, and cloud credentials, exfiltrating them in AES-256-CBC-encrypted form to the same server. Detection methods include Shannon entropy analysis and base64 frame-pattern checks.

Reverse engineering Apple's silent security fixes (8 minute read)
Apple's new Background Security Improvements (BSI) mechanism silently patches Safari, WebKit, and system libraries via AEA-encrypted binary diffs applied to cryptex sidecar images, activating on the next restart without user interaction. The March 17 BSI publicly disclosed one fix (CVE-2026-20643, a WebKit Navigation API same-origin bypass caused by faulty AND logic that let isSameSite short-circuit the isSameOriginAs port check) but also shipped two undisclosed fixes: a WebGL integer overflow in libANGLE's generateIndexBuffer (a size_t narrowed to int, now fixed with a 64-bit overflow guard) and a ServiceWorker registration use-after-free hardening (a WeakRef-to-Ref promotion in SWServerRegistration). Security teams can replicate the teardown using ipsw ota patch rsr to reconstruct patched cryptex DMGs, followed by ipsw diff for symbol-level triage and IDA Pro decompilation against the extracted dyld_shared_cache for function-level confirmation.

Using threat modeling and prompt injection to audit Comet (6 minute read)
Trail of Bits audited Perplexity's Comet AI browser using its TRAIL threat model, mapping trust boundaries between the local browser profile and Perplexity's agent servers to identify prompt injection vectors capable of exfiltrating Gmail contents through the AI assistant's authenticated session access. Four injection techniques were demonstrated across five proof-of-concept exploits: fake security mechanisms (CAPTCHA and validator lures), summarization instruction hijacking, fake system instructions, and fake user requests.
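The detection heuristics mentioned in the audio-steganography item can be sketched as follows. Note two distinct signals: raw ciphertext pushes per-byte entropy toward 8.0 bits, while base64-wrapped ciphertext is confined to a 64-symbol alphabet and so caps out near 6.0; frame data drawn entirely from the base64 alphabet is itself a strong indicator. The structure below is an assumption, not the actual detection tooling:

```python
import math
import wave

# Base64 alphabet plus padding and line breaks.
B64 = frozenset(
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=\r\n"
)

def shannon_entropy(data: bytes) -> float:
    """Per-byte Shannon entropy in bits (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def audit_wav(path: str) -> dict:
    """Heuristics over a WAV file's frame bytes: base64-only frames are a
    strong stego indicator; entropy near 6.0 suggests base64-wrapped
    ciphertext, near 8.0 suggests raw ciphertext."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return {
        "entropy": shannon_entropy(frames),
        "base64_only": bool(frames) and all(b in B64 for b in frames),
    }
```

Real audio has structure (silence, periodicity, clustered sample values) that keeps its entropy below these ceilings, which is what makes the statistical check workable.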
One notable finding was that intentional typos in system warning tags were required for the exploits to succeed, because the agent flagged correctly spelled versions as fraudulent. Defenders building agentic AI products should enforce strict trust-level separation between system prompts and external page content, apply least-privilege scoping to agent tool access, and systematically red-team adversarial prompt injection before deployment.

Tracebit (Product Launch)
Tracebit offers cloud-native threat deception that plants tailored canary assets across identities, endpoints, and cloud infrastructure to lure attackers, expose compromised accounts quickly, and prevent attacks, including AI-driven attacks.

trawl (GitHub Repo)
trawl is an LLM-powered scraper that lets users semantically describe the information they want to extract.

Bromure (GitHub Repo)
Bromure creates a secure, ephemeral browsing environment in a disposable VM on macOS systems.

Security boffins scoured the web and found hundreds of valid API keys (3 minute read)
Stanford-led researchers scanned 10 million websites using TruffleHog and found 1,748 valid API credentials across 10,000 pages, including keys for AWS, Stripe, GitHub, and OpenAI. A global systemically important bank exposed cloud credentials that gave direct access to its databases. Credentials remained exposed for an average of 12 months, with 84% buried in JavaScript bundles.

Google Unleashes Gemini AI Agents on the Dark Web (2 minute read)
Google Threat Intelligence has added a dark web intelligence service that uses Gemini AI agents to crawl up to 10 million posts a day. Organizations can create an organization profile in the service; agents then crawl the Internet to discover non-sensitive public information, monitor for potentially relevant dark web postings, and raise alerts.
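As a toy illustration of the API-key study's approach, a scanner like TruffleHog matches well-known credential formats in fetched content such as JavaScript bundles. The patterns below are simplified examples of a few public key formats; real scanners use far larger rule sets and verify candidates against the issuing APIs before reporting them as valid:

```python
import re

# Simplified patterns for a few widely documented credential formats.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "stripe_live_key": re.compile(r"\bsk_live_[A-Za-z0-9]{24,}\b"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs found in the given text."""
    hits = []
    for kind, rx in PATTERNS.items():
        hits += [(kind, m) for m in rx.findall(text)]
    return hits
```

The 84% of findings buried in JavaScript bundles is why scanning rendered pages alone misses most exposures; the fetch step has to pull linked script assets too.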
Google says the service has a 98% accuracy rate, compared with traditional keyword-based services that generate 80-90% false positives.

The Hackers Who Tracked My Sleep Cycle (3 minute read)
Attackers exploited a Glama.ai payment overdraft window by mass-creating accounts, attaching valid payment methods, and firing expensive LLM calls before the charges were rejected, netting roughly $1,000 in credits nightly. They monitored the developer's Discord online status to time attacks during his offline windows, pausing whenever he appeared active. JA4 TLS fingerprinting and ALTCHA proof-of-work proved the most durable deterrents, though no single method held indefinitely; layered friction remained the only viable defense.
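Proof-of-work friction of the kind ALTCHA provides raises the cost of mass account creation by forcing each client to burn CPU before the server proceeds. A minimal sketch of the underlying hash-puzzle idea (this is an illustration of the concept, not ALTCHA's actual protocol; challenge strings and difficulty are assumptions):

```python
import hashlib
from itertools import count

def solve(challenge: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce whose SHA-256 with the challenge has the required
    number of leading zero bits. Cost grows as 2**difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Server-side check: one hash, regardless of how hard solving was."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: verification costs one hash, while solving costs thousands, so a legitimate signup pays milliseconds but a nightly mass-registration run pays real CPU time per account.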