S1ngularity Attack Weaponizes AI Tools: Claude, Gemini, and Q Used to Steal 20,000 Developer Files

S1ngularity Nx npm attack hijacked local AI CLIs (Claude, Gemini, Q) via prompt injection and exfiltrated secrets to public GitHub repos.

 

A terminal window with streaming "postinstall" and base64 blocks, ghosted AI agent icons, and a highlighted "s1ngularity-repository" on GitHub.

 A compromised Nx npm supply-chain update briefly turned developer machines and CI runners into exfiltration bots by hijacking local AI CLI agents (Claude, Gemini, Q) with a malicious prompt, then double/triple‑base64 exfiltrated secrets to public GitHub repos under the victim’s own account named “s1ngularity-repository”. Thousands of repos and at least 2,349 secrets were exposed, mostly GitHub tokens—many still valid at time of analysis.

What happened

  • On Aug 26–27 (UTC), eight malicious Nx/Nx Powerpack versions shipped to npm and stayed live for ~5 hours 20 minutes before takedown; the payload ran via postinstall in telemetry.js across nx, @nx/devkit, @nx/node, @nx/js and other packages, plus impact noted in the Nx VS Code extension.endorlabs+1

  • The malware created a public GitHub repo in the victim’s account named s1ngularity-repository (and suffixed variants), then uploaded results.b64 containing secrets and system inventory after multi‑stage base64 encoding to evade naive detections.stepsecurity+1

First documented weaponization of local AI assistants

  • The payload checked for installed developer AI CLIs (claude, gemini, q) and sent a crafted prompt instructing the assistant to enumerate sensitive files and secrets (tokens, SSH keys, wallets, .env), effectively outsourcing reconnaissance to trusted tools. Some guardrails blocked certain prompts, but researchers still observed hundreds of successful runs.semgrep+1

  • This is one of the first documented cases of malware leveraging LLM coding agents for host‑level recon and exfiltration during a supply‑chain event.infosecurity-magazine+1

Exfiltration mechanics

  • The script used the victim’s GitHub token to create a public repo and commit results.b64; code samples show triple‑base64 encoding of a JSON payload including ghToken and other harvested data, with many repos publicly crawled before GitHub mass disabled them.stepsecurity

  • Researchers observed thousands of public repos, >1,000 leaked GitHub tokens (with ~90% still valid at one point), dozens of cloud creds (AWS, npm), AI API keys, and total leaked files around 20,000 across waves.wiz+1

Two‑phase impact timeline

  • Phase 1 (Aug 26–27): Malicious npm versions published; exfil to s1ngularity‑repository repos; GitHub disabled attacker‑created repos Aug 27 ~09:00 UTC.snyk+1

  • Phase 2 (Aug 28–29): Automation used stolen tokens to mass‑publish victims’ private repos (5,500+ repos, 400+ users/orgs) until suspended—magnifying exposure even after initial takedown.devops+1

Scale and exposure

  • GitGuardian tallied 1,346 repos and 2,349 leaked secrets, mostly GitHub OAuth/PATs; Wiz observed “thousands” of repos with results.b64 and noted many tokens remained valid, urging immediate rotation.wiz+1

  • The malware also executed in CI (GitHub Actions) and via Nx VS Code extension, not just local dev shells, broadening blast radius across organizations.endorlabs+1

Developer security implications

  • AI agent trust boundary: Local AI CLIs inherit host permissions; a poisoned prompt can turn them into privileged recon engines. Solving this requires tool‑level allowlists, permission prompts, and sandboxed execution.semgrep+1

  • Postinstall risk: Any npm install can execute postinstall scripts; lockfiles, verified publishers, and no‑scripts installs for CI reduce exposure.endorlabs

Enterprise supply‑chain protection

  • Immediate actions

    • Rotate GitHub tokens, npm tokens, cloud creds; search org for s1ngularity‑repository repos and results.b64 forks; check audit logs for repo creation bursts Aug 26–29.github+1

    • Invalidate leaked PATs via org‑wide token revocation; block tokens without fine‑grained scopes; require SSO‑enforced tokens.devops

  • Medium‑term

    • Pin exact versions with lockfiles; enforce provenance/attestations (npm package integrity, Sigstore where supported); block postinstall in CI with npm ci --ignore‑scripts; isolate runners.endorlabs

    • Policy for AI tooling: deny by default on build agents; restrict AI CLIs to user workstations in a sandbox; require explicit “tool calls” confirmations for file access.semgrep+1

AI tool security considerations

  • Configure AI CLIs with least privilege; disable filesystem and network tools by default; require user confirmation or signed policies for directory traversal and file reads.semgrep

  • Centralize logging: capture prompts/outputs of AI agents on enterprise endpoints; alert on patterns like “search for tokens/SSH/.env” to detect weaponized prompts.wiz

Detection and response

  • Hunt queries

    • GitHub: org‑wide search for repos named s1ngularity‑repository* and commits to results.b64 between Aug 26–29; review Actions logs for anomalous install+postinstall runs of nx packages.stepsecurity+1

    • Endpoints/CI: file events for results.b64, sudden base64 loops, and postinstall execution of telemetry.js in Nx packages; network calls to GitHub repo creation/contents APIs from build hosts.stepsecurity+1

  • Containment

    • Revoke tokens, rotate keys, quarantine hosts that ran compromised versions, and re‑image CI runners; review package caches to purge malicious artifacts.github+1

Developer checklist (copy‑paste)

  • Search GitHub for s1ngularity‑repository on account/org; delete archives, rotate all tokens and SSH keys; force 2FA/SSO tokens.

  • npm ci --ignore‑scripts in CI; for local dev, monitor postinstall and avoid global installs from unverified publishers.

  • Lock/pin Nx and plugins to safe versions; verify against GHSA‑cxm3‑wv7p‑598c advisory; clear npm cache and node_modules before reinstall.

  • Sandbox AI CLIs; disable filesystem tools by default; review and log prompts.

Alfaiz Nova analysis: AI assistant weaponization

  • Trend: Adversaries are discovering that developer AI agents act as pre‑installed recon kits with user‑granted privileges. Expect broader malware families to ship “prompt payloads” that co‑opt local agents with flags like --yolo/--trust‑all‑tools, shifting effort from writing scanners to writing prompts.snyk+1

  • Detection: Beyond IOCs, look for behavior—postinstall spawning AI CLIs, rapid file enumeration across home, .ssh, .config, and .env, followed by GitHub API repo creation and multi‑stage base64 encoding. Policy‑gate AI tool file operations and instrument prompts for forbidden intents.stepsecurity+1

Sources

  • Snyk, Wiz, StepSecurity, Endor Labs: timelines, AI CLI prompt abuse, repo naming, and triple‑base64 exfiltration to results.b64.snyk+3

  • DevOps.com and Infosecurity: leaked secret counts (2,349+), thousands of repos, second‑phase repo mass‑publishing using stolen tokens.infosecurity-magazine+1

  • GHSA advisory from Nx maintainers, impacted packages, and remediation guidance.github

Hey there! I’m Alfaiz, a 21-year-old tech enthusiast from Mumbai. With a BCA in Cybersecurity, CEH, and OSCP certifications, I’m passionate about SEO, digital marketing, and coding (mastered four languages!). When I’m not diving into Data Science or AI, you’ll find me gaming on GTA 5 or BGMI. Follow me on Instagram (@alfaiznova, 12k followers, blue-tick!) for more. I also run https://www.alfaiznova.in for gadgets comparision and latest information about the gadgets. Let’s explore tech together!"
NextGen Digital... Welcome to WhatsApp chat
Howdy! How can we help you today?
Type here...