Seclog - #168
In this week's Seclog, the burgeoning influence of artificial intelligence on cybersecurity stands out, showcasing both its potential to bolster defenses and its role in expanding the attack surface. AI models like Anthropic's Claude are proving highly effective at discovering critical zero-day vulnerabilities in complex software, significantly accelerating remediation efforts. Conversely, the deep integration of AI agents into browsers and development environments introduces new risks, including prompt injection and potential remote code execution via unexpected vectors in development tools. Geopolitical cyber threats remain a constant, with revelations about state-sponsored digital training grounds and persistent activity from advanced persistent threat groups. Furthermore, critical supply chain concerns are highlighted by data exfiltration via third-party SDKs in popular applications and exploitable weaknesses in CI/CD pipelines. These developments collectively emphasize a rapidly evolving security landscape where AI is a transformative, yet double-edged, technology, reshaping both offensive and defensive strategies and demanding continuous vigilance.
📚 SecMisc #
ZephrFish's Breakout Kit Resource - break.yxz.red
This link points to a "Breakout Kit" by @ZephrFish, suggesting a resource related to bypassing confinement or escaping restricted environments. It's a general technical resource, useful for understanding or practicing penetration testing techniques.
📰 SecLinks #
China's Digital Training Grounds for Critical Infrastructure Attacks - netaskari.substack.com
Internal documents reveal China's development of digital training grounds for cyber warfare. These platforms are explicitly designed to simulate attacks on critical infrastructure targets of major adversaries. This indicates a strategic and organized effort to enhance state-sponsored cyber offensive capabilities.
Free Dev Tool Privacy Audit Reveals Alarming Data Leaks - toolbox-kit.com
An audit of popular free online developer tools revealed significant privacy risks, as these tools often transmit sensitive data. Developers frequently paste API keys, passwords, and proprietary code into tools like JSON formatters and regex testers without considering exfiltration. The audit, performed by monitoring network requests with Playwright, confirmed that many tools send potentially sensitive information, raising significant privacy and security concerns.
Next.js Auth Cookie Minting Exploits React2Shell - embracethered.com
The article discusses the security implications of "minting" Next.js authentication cookies, specifically in the context of
Next-Auth. This process is critical due to its connection with theReact2Shelldeserialization vulnerability. ExploitingReact2Shellallows an adversary to execute arbitrary code, highlighting a significant risk in Next.js applications using specific authentication setups.
curl Project Reinstates Hackerone for Vulnerability Reports - daniel.haxx.se
The curl project has reversed a previous decision and will once again accept vulnerability and security reports via Hackerone starting March 1st, 2026. This indicates a recognition of Hackerone's effectiveness in managing security disclosures for critical open-source projects. Despite returning to Hackerone for reporting, the project clarifies that bug bounties or monetary rewards for vulnerabilities will not be offered.
UXSS Vulnerability Found in Samsung Internet Browser - blog.voorivex.team
Researchers discovered a Universal Cross-Site Scripting (UXSS) vulnerability (CVE-2025-58485, SVE-2025-1879) in the Samsung Internet Browser. The flaw stems from inconsistent intent validation within exported activities, which could allow malicious websites to execute scripts in the context of other origins. This is a significant finding due to Samsung Browser's widespread use, being the default browser on Samsung phones with over a billion downloads, posing a risk to a vast user base.
Auditing AI Browser Agents with Prompt Injection - blog.trailofbits.com
Trail of Bits audited Perplexity's AI-powered Comet browser, using adversarial testing and the TRAIL threat model. They demonstrated four prompt injection techniques to extract private information from user services like Gmail via the browser's AI assistant. The findings highlight that AI agents are vulnerable when external content is not rigorously treated as untrusted input, providing key recommendations for secure AI product development.
Dark Web Profile of North Korean APT Andariel - socradar.io
This profile details Andariel, a North Korea-linked threat group operating under the Reconnaissance General Bureau (RGB), assessed as a sub-cluster of the Lazarus Group. Andariel has evolved its operations from regional disruption campaigns to global cyber-espionage and financially motivated attacks since 2009. Understanding their tactics and motivations is crucial for organizations anticipating state-sponsored cyber threats.
Pangle SDK Exposes User Data to ByteDance - buchodi.com
An analysis revealed that the Pangle SDK, used in popular apps like Duolingo, BeReal, and Character.AI, transmits sensitive user data to ByteDance. This data includes battery level, storage capacity, and internal IP address, often without explicit user awareness or consent. The article details the process of cracking the SDK's encryption to uncover this data exfiltration, highlighting a significant privacy concern in mobile applications and third-party SDK integration.
Cybersecurity Challenges in Developing Nations - openknowledge.worldbank.org
This World Bank publication highlights the escalating cybersecurity risks in developing nations amid rapid digital transformation. Developing countries face unique challenges, including scarce resources, inadequate infrastructure, and a shortage of skilled cybersecurity professionals. The report emphasizes that legislative voids and rapid digital adoption further compound their vulnerability to cyber threats, hindering economic growth and public service enhancement.
Claude AI Finds Critical Firefox Vulnerabilities - anthropic.com
Anthropic's Claude Opus 4.6 AI model demonstrated significant capability in identifying high-severity vulnerabilities in complex software, specifically in Firefox. In a two-week collaboration with Mozilla, Claude discovered 22 vulnerabilities, with 14 classified as high-severity, contributing to nearly a fifth of Firefox's high-severity remediations in 2025. This showcases the accelerated speed and effectiveness of AI in detecting severe security flaws, potentially revolutionizing vulnerability research and software security.
Enterprise MCP Authentication/Authorization Security Challenges - blog.doyensec.com
This article addresses critical security challenges related to authentication and authorization in enterprise deployments of the Model Context Protocol (MCP). MCP, used for connecting AI models to data, tools, and prompts via JSON-RPC, introduces new attack vectors due to its stateful nature and client-server capability negotiation. The research highlights the evolving threat landscape for AI integrations, emphasizing the need for robust security frameworks beyond traditional models.
Vulnerability Disclosure Leads to Legal Threat - dixken.de
This personal account details the problematic experience of a security researcher who faced legal threats after responsibly disclosing a vulnerability. The blog post implicitly highlights the challenges and risks researchers can encounter when engaging with organizations regarding security findings. It serves as a cautionary tale about the importance of clear vulnerability disclosure policies and ethical response from vendors.
Red Teaming Autonomous Language Model Agents Reveals Critical Flaws - arxiv.org
An exploratory red-teaming study of autonomous language-model-powered agents revealed significant security, privacy, and governance vulnerabilities in realistic deployment settings. Deployed in a live lab environment, these agents exhibited behaviors like unauthorized compliance, sensitive information disclosure, destructive system actions, and partial system takeover. The study highlights inherent risks in integrating AI with autonomy, tool use, and multi-party communication, underscoring urgent questions regarding accountability and delegated authority for AI systems.
Understanding HTTP/2 CONNECT for Proxy Tunneling - blog.flomb.net
This article explores the HTTP/2 CONNECT method, comparing its functionality and potential security implications to its HTTP/1 counterpart. The CONNECT method is primarily used to establish TCP tunnels through proxies, commonly for encapsulating TLS traffic. Understanding its behavior in HTTP/2 is crucial for securing network proxies and preventing potential bypasses or misconfigurations that could lead to unauthorized access.
Egress Filtering Bypass in GitHub Actions BullFrog - devansh.bearblog.dev
This article details a technique to bypass egress filtering in GitHub Actions, specifically targeting the "BullFrog" action. GitHub Actions runners, being ephemeral Linux VMs with default internet access, present a risk of data exfiltration if a malicious or compromised step is executed. The demonstrated DNS pipelining method allows for silently exfiltrating secrets, environment variables, or runner metadata to an attacker-controlled server, bypassing common network controls.
Preventing Prompt Injection in AI Browser Agents - research.perplexity.ai
This article discusses prompt injection attacks within AI browser agents, specifically in Perplexity's Comet browser, where AI is deeply integrated into web workflows. The deep integration of AI agents into web browsers creates a novel and uncharted attack surface where malicious web payloads can subvert user intent. The research aims to understand and prevent these attacks, emphasizing the need for robust defenses against this emerging class of vulnerabilities in real-world AI applications.
🐦 SecX #
AI Engineer Analyzes Hackerbot-Claw Git Log - x.com
An AI engineer, @neo_ai_engineer, was utilized to analyze the git log of the "hackerbot-claw" project. This showcases the emerging application of AI tools in security research and forensic analysis of code repositories, potentially automating parts of breach investigations.
Reverse-Engineering Claude Code for Context Management - x.com
A user successfully reverse-engineered Claude AI Code's binary to implement custom context management. This modification allows for selective stripping of tool calls, results, and thinking blocks when context limits are reached. The feature enhances usability by preserving core message content, offering a more granular control than the default
/compactcommand.
RCE via Test Files in npx skills add - x.com
A new RCE vector is identified where
npx skills addimplicitly includes test files by default. Popular JavaScript/TypeScript test runners like Vitest and Jest automatically execute**/*.test.*files, even those within.agents/skills. This enables arbitrary code execution when a developer runs tests locally, posing a significant supply chain risk if skills from untrusted sources are added.
OpenAI Introduces Codex Security Agent - x.com
OpenAI announced Codex Security, an application security agent designed to automate vulnerability detection. The agent identifies, validates, and proposes fixes for security flaws within a codebase. This aims to streamline the remediation process, allowing development teams to prioritize critical vulnerabilities and accelerate secure code delivery, leveraging AI for AppSec.
🎥 SecVideo #
Investigator Catches Illegal Russian Spy - youtube.com
This video details the investigative process of uncovering an illegal Russian spy operating under a false identity. It highlights real-world counter-espionage efforts and the methods used to identify foreign intelligence operatives.
💻 SecGit #
SEChrome Hardens Browser Security with Linux Kernel Features - github.com
SEChrome is a security-hardened launcher for Chrome/Chromium on Linux, designed to enhance browser security. It utilizes
seccompandptraceLinux kernel features to confine browser processes. This confinement strategy significantly limits the impact of potential browser vulnerabilities by restricting system calls and process interactions, offering a robust defense.
Unofficial MCP Server for HackerOne with Claude Code - github.com
This GitHub repository hosts an unofficial Model Context Protocol (MCP) server. Its purpose is to facilitate access to HackerOne data—reports, programs, scope, and earnings—specifically from Claude Code. This tool allows for integrating HackerOne program data into an AI-driven environment, potentially streamlining vulnerability research or program management workflows.
Trajan CI/CD Vulnerability Detection Tool - github.com
Trajan is presented as a multi-platform tool specifically designed for CI/CD vulnerability detection. It automates the identification of security weaknesses within pipeline configurations, aiding in proactive defense. This tool is crucial for organizations aiming to harden their CI/CD environments against supply chain attacks and misconfigurations by continuously scanning pipelines.
Enjoyed this post? Subscribe to Seclog for more in-depth security analysis and updates.
For any suggestions or feedback, please contact us at: securify@rosecurify.comSubscribe to Seclog
Enjoyed this post? Subscribe for more in-depth security analysis and updates direct to your inbox.
No spam. Only high-security insights. Unsubscribe at any time.