TransIT AI

Blog · May 12, 2026 · Knox Hutchinson

Anatomy of the TanStack supply-chain attack

On Monday, 2026-05-11, between 19:20 and 19:26 UTC — a six-minute window — 42 packages in the @tanstack/* namespace on the npm registry had malicious versions published. By 19:50 an outside researcher noticed; by 20:00 a response was underway; by morning, the maintainers had deprecated the bad versions and were working with npm to pull the tarballs.

@tanstack/react-router alone is downloaded roughly 12 million times a week. Within hours the same worm had jumped to Mistral AI’s SDKs, UiPath’s automation tooling, OpenSearch’s client library, and around a hundred other packages. The total damage so far: 170+ npm packages and 2 PyPI packages compromised, 404 malicious versions, one coordinated campaign. The malware family is called Mini Shai-Hulud — a lighter-weight relative of last year’s Shai-Hulud worm.

Three things make this attack different from the supply-chain breaches you’ve seen before:

  1. No maintainer was phished, had a password leaked, or had a token stolen from their account. Nobody at TanStack lost a credential or had a workstation compromised.
  2. The malicious packages were published by TanStack’s own release pipeline, using TanStack’s own legitimate identity. To the registry — and to anyone validating signatures — these tarballs were authentic.
  3. The packages shipped with valid SLSA provenance. SLSA is the cryptographic chain-of-custody system the JavaScript world has been promoting as the answer to this exact category of attack. This is the first publicly documented case of malware shipping with valid SLSA provenance. The signature said “yes, this came from TanStack’s CI” — because it did.

This is the part worth dwelling on. The attacker didn’t need to defeat the trust system. They turned it to their own ends.

The mechanics, in plain terms

Think of npm publishing the way you’d think about a certificate authority. The CA has an issuing pipeline. The pipeline is fronted by an HSM. The HSM signs whatever’s put in front of it, authorized by a short-lived credential held by the build agent. To forge a certificate, you historically had to steal the HSM’s key.

The Mini Shai-Hulud attacker found a way to make the build agent spend that credential on a poisoned input. Five steps:

  1. A Trojan pull request. The attacker forked TanStack’s router repository, renamed the fork to disguise its origin (zblgg/configuration — a generic name that doesn’t show up in fork-list searches), then opened a pull request. That PR triggered a misconfigured CI workflow — one that runs the fork’s code in a privileged context, with access to the base repository’s secrets and caches. This misconfiguration has been known and named publicly for years; it’s the “Pwn Request” pattern. Plenty of projects have it.

  2. Poisoning the shared cache. Once the attacker’s code was running with privileges, it wrote a malicious dependency entry into the shared CI build cache. The cache is meant as a performance optimization — a fast key/value store keyed by the project’s dependency lockfile hash, shared across builds so that every workflow doesn’t re-download the same packages. Nothing in its design says the writer has to be the same as the reader.

  3. Waiting. The attacker did nothing else. No credentials. No login attempts. No commits to main. They left the poisoned cache and walked away.

  4. The hijack, hours later. When a TanStack maintainer merged an unrelated, legitimate PR to main, the release workflow ran. It restored its cache — which now contained the attacker’s poisoned dependency. The build pulled that dependency in. As part of normal install behavior, the dependency executed code. That code did exactly one thing: it read the build agent’s process memory (/proc/<pid>/mem) and extracted the short-lived OIDC token the agent was holding for npm publishing. (A sketch of this memory-scraping step follows the list.)

  5. Publishing the malware. With that token, the attacker’s code minted a real npm publish credential, modified the package archives in flight to include the malicious payload, and published. Every step happened inside the legitimate workflow, under the legitimate identity. The provenance record — the cryptographic document that says “this artifact was built here, at this commit, in this workflow run” — was true.
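
To make step 4 concrete, here is a stripped-down sketch of the memory-scraping technique, as Node-flavored TypeScript. It scans its own process rather than another one (scanning a different PID additionally requires ptrace permission), and the token pattern is an illustrative stand-in, not the actual payload’s logic:

```ts
// Illustrative sketch only: scan this process's memory for token-shaped
// strings on Linux. /proc/self/maps lists the mapped regions;
// /proc/self/mem lets you read them by absolute address.
import { openSync, readSync, readFileSync, closeSync } from "node:fs";

const maps = readFileSync("/proc/self/maps", "utf8");
const mem = openSync("/proc/self/mem", "r");
const tokenPattern = /npm_[A-Za-z0-9]{36}/g; // stand-in pattern, an assumption

for (const line of maps.split("\n")) {
  // Each maps line starts "<start>-<end> <perms> ..."; only readable
  // regions can be pulled out of /proc/self/mem.
  const m = line.match(/^([0-9a-f]+)-([0-9a-f]+)\s+r/);
  if (!m) continue;
  const start = BigInt(`0x${m[1]}`);
  const len = Number(BigInt(`0x${m[2]}`) - start);
  if (len > 64 * 1024 * 1024) continue; // skip giant mappings
  const buf = Buffer.alloc(len);
  try {
    readSync(mem, buf, 0, len, start); // bigint position = absolute address
  } catch {
    continue; // unreadable region; move on
  }
  for (const hit of buf.toString("latin1").matchAll(tokenPattern)) {
    console.log("token-shaped string:", hit[0]);
  }
}
closeSync(mem);
```

Short-lived or not, any secret sitting in a process’s address space is readable by code running as that process.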

The closest infrastructure analogy is this: imagine an attacker who, without ever touching the HSM, gets unprivileged write access to the build queue. Later, when the queue is processed, the HSM signs whatever’s at the front of the queue — including the attacker’s prepared payload. The HSM did nothing wrong. The signing operation looks valid. The downstream verifier has no way to know the material being signed was injected.

What the malware steals

Mini Shai-Hulud is a credential stealer with a worm bolted on. On install, it goes hunting for everything an operations team would care about:

  • CI/CD secrets — GitHub Actions, GitLab CI, CircleCI tokens.
  • Cloud credentials — AWS environment variables, IMDSv2 metadata (169.254.169.254/latest/meta-data/iam/security-credentials/), ECS task-role endpoint (169.254.170.2), AWS Secrets Manager, AWS Systems Manager Parameter Store. GCP and Azure metadata services. Anything reachable from a runner. (The IMDSv2 two-step is sketched just after this list.)
  • HashiCorp Vault tokens (via the vault.svc.cluster.local:8200 service endpoint inside Kubernetes clusters).
  • Kubernetes service-account tokens mounted into pods at /var/run/secrets/kubernetes.io/serviceaccount/token.
  • Package registry tokens — npm tokens, PyPI tokens, GitHub Personal Access Tokens.
  • SSH private keys on the local filesystem.
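
The cloud-credential line deserves a concrete illustration. On AWS, “anything reachable from a runner” means the IMDSv2 two-step below. The endpoints are AWS’s real ones; the rest is a sketch of what a stealer (or your own canary test) would do:

```ts
// Sketch: the IMDSv2 two-step. Any code running on an EC2-backed runner
// with metadata access can do this; no stored secret is required.
const IMDS = "http://169.254.169.254";

async function imdsCredentials(): Promise<unknown> {
  // Step 1: IMDSv2 requires a session token, obtained via PUT.
  const token = await fetch(`${IMDS}/latest/api/token`, {
    method: "PUT",
    headers: { "X-aws-ec2-metadata-token-ttl-seconds": "60" },
  }).then((r) => r.text());
  const headers = { "X-aws-ec2-metadata-token": token };

  // Step 2: list the attached roles, then fetch the temporary keys.
  const roles = await fetch(
    `${IMDS}/latest/meta-data/iam/security-credentials/`,
    { headers },
  ).then((r) => r.text());
  const role = roles.trim().split("\n")[0];
  return fetch(
    `${IMDS}/latest/meta-data/iam/security-credentials/${role}`,
    { headers },
  ).then((r) => r.json()); // AccessKeyId, SecretAccessKey, Token, Expiration
}
```

If your runners don’t need instance credentials, a metadata hop limit of 1 (or disabling IMDS outright) closes this door for containerized workloads.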

Stolen credentials are exfiltrated through a server fronted by the Session decentralized messaging network — filev2.getsession.org, seed1.getsession.org, seed2.getsession.org, seed3.getsession.org. On the wire this looks like encrypted peer-to-peer messaging traffic, not traditional command-and-control. Standard egress filters keyed to known C2 domains do not catch it.

The worm half of the malware uses any harvested npm tokens to enumerate packages the victim has publish access to, modify those packages’ archives to include the same malicious payload, and publish poisoned versions. That’s how the campaign jumped from TanStack into Mistral, UiPath, OpenSearch, and the rest within a few hours. Every successful infection produces more publish credentials, which produce more poisoned packages, which produce more infections.
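
The same enumeration is worth running defensively, because it answers “what could a leaked token for this account publish?” npm’s public search API accepts a maintainer: qualifier; the username below is a placeholder:

```ts
// Sketch: list the packages an npm account maintains, i.e. the blast
// radius of that account's publish token. The username is a placeholder.
const user = "example-maintainer";
const res = await fetch(
  `https://registry.npmjs.org/-/v1/search?text=maintainer:${user}&size=250`,
);
const { total, objects } = (await res.json()) as {
  total: number;
  objects: { package: { name: string; version: string } }[];
};
console.log(`${user} maintains ${total} packages:`);
for (const { package: pkg } of objects) {
  console.log(`  ${pkg.name}@${pkg.version}`);
}
```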

What to look for

If you’re running egress monitoring, threat-hunting on developer infrastructure, or auditing CI runners, the following indicators of compromise are worth adding to detection rules and search queries.

Network indicators (post-infection beacon traffic):

  • filev2.getsession.org
  • seed1.getsession.org, seed2.getsession.org, seed3.getsession.org
  • Unexpected outbound HTTP from a CI runner to AWS IMDSv2 (169.254.169.254)
  • Unexpected outbound HTTP from a CI runner to the ECS task-role endpoint (169.254.170.2)
  • Unusual reads of Kubernetes service-account paths from application containers

Filesystem and build indicators:

  • Files named router_init.js, router_runtime.js, or tanstack_runner.js anywhere on developer workstations or CI runners
  • Any lockfile entry naming @tanstack/setup
  • The git reference github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c appearing anywhere in any lockfile
  • Unexpected hook configurations in .claude/settings.json (a variant targets Claude Code config files)
  • Unexpected auto-run tasks in .vscode/tasks.json

Identity indicators:

  • npm publish events from accounts zblgg (numeric id 127806521) or voicproducoes (numeric id 269549300)
  • Git commits authored as claude@users.noreply.github.com — a spoofed identity the worm uses when pushing follow-on poisoned commits to repositories the victim has access to

If you operate any internal mirror or proxy of npm — a Nexus, Artifactory, or JFrog instance — search the cache for the named git reference and for any @tanstack/router* versions published between 19:20 and 19:50 UTC on 2026-05-11.
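
If you’d rather script the lockfile checks than eyeball them, a recursive grep for the two strongest indicators is a few lines. The lockfile names below cover npm, pnpm, and yarn; point it at a repo root or an exported mirror cache:

```ts
// Sketch: walk a directory tree and flag lockfiles containing either of
// the two indicators named above.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const IOCS = [
  "@tanstack/setup",
  "tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c",
];
const LOCKFILES = new Set(["package-lock.json", "pnpm-lock.yaml", "yarn.lock"]);

function walk(dir: string): void {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const p = join(dir, entry.name);
    if (entry.isDirectory() && entry.name !== "node_modules") {
      walk(p);
    } else if (LOCKFILES.has(entry.name)) {
      const text = readFileSync(p, "utf8");
      for (const ioc of IOCS) {
        if (text.includes(ioc)) console.log(`HIT ${p}: ${ioc}`);
      }
    }
  }
}

walk(process.argv[2] ?? ".");
```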

What to do

The first cut is easy: if you do not use any of the 42 affected @tanstack/* packages, you are not directly compromised by this campaign. TanStack’s Query, Table, Form, Virtual, Store, and Start families were not affected — only the Router family and its adjacent tooling. A lockfile audit will tell you. Your package manager has one built in: npm audit, pnpm audit, or yarn audit.
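
If you want to see exactly which @tanstack/* versions your lockfile pins rather than trust memory, something like this works for npm’s v2/v3 package-lock.json format (pnpm and yarn lockfiles need their own parsing, and the official advisory’s affected-version list is the source of truth):

```ts
// Sketch: enumerate @tanstack/* entries in a v2/v3 package-lock.json,
// whose "packages" map is keyed by node_modules path.
import { readFileSync } from "node:fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
for (const [path, meta] of Object.entries<{ version?: string }>(
  lock.packages ?? {},
)) {
  if (path.includes("node_modules/@tanstack/")) {
    const name = path.slice(
      path.lastIndexOf("node_modules/") + "node_modules/".length,
    );
    console.log(name, meta.version);
  }
}
```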

If your build does pull in any of the affected packages — directly, or through a transitive dependency you didn’t realize was there — the rule is straightforward but unforgiving: treat every machine that ran an install against a poisoned version, between 19:20 UTC on 2026-05-11 and the moment your registry mirror caught up to the deprecations, as compromised.

That means rotating, in order of blast radius:

  1. Every npm token any of those machines or accounts could see. This is how the worm propagates. Cut its fuel first.
  2. Every GitHub Personal Access Token loaded on any affected developer machine and every Actions secret reachable from affected workflows.
  3. AWS access keys, role-assumption credentials, and instance profile credentials for any role those machines or runners could assume. Don’t trust IAM-level scoping to bound exposure here — assume the credential was used.
  4. Vault tokens, Kubernetes service-account tokens, registry credentials for any cluster or service reachable from the affected pipeline.
  5. Any secret that lived in an environment variable on an affected runner during the window.

Don’t rely on developer memory. (“Did you run install yesterday? After the morning standup?”) Audit the lockfiles, audit the CI runs, rotate anything in scope. The roughly thirty minutes between first publish and outside detection was plenty of time for a credential to be exfiltrated, mined for permissions, and pre-staged for later use.

The three structural lessons

Three takeaways here generalize past JavaScript, past npm, and past this specific incident.

Cryptographic provenance is necessary, but not sufficient. SLSA, Sigstore, signed releases — these all answer the question “was this artifact produced by the legitimate build infrastructure?” None of them answer “was the legitimate build infrastructure compromised at the moment it produced this artifact?” Treat provenance as one signal among several. A signature tells you where something came from. It does not tell you the thing was safe.

Build caches are not a security boundary. A cache shared between an unprivileged context (a fork’s PR build) and a privileged context (the base repository’s release workflow) is, by construction, a privilege-escalation primitive. If your CI uses shared caches across trust boundaries — and most do — review your pull_request_target-style workflows, your cache scoping rules, and every place where unprivileged code can write to storage that a privileged workflow later reads.
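
A first-pass audit for the “Pwn Request” shape can be as blunt as string matching. This sketch flags workflows that pair the privileged pull_request_target trigger with a checkout of the fork’s head; its hits are leads to review, not proof of exploitability:

```ts
// Sketch: naive scan of GitHub Actions workflows for the risky pairing.
// String matching, not a YAML parser; expect some false positives.
import { existsSync, readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const dir = ".github/workflows";
if (existsSync(dir)) {
  for (const f of readdirSync(dir)) {
    const text = readFileSync(join(dir, f), "utf8");
    const privileged = text.includes("pull_request_target");
    const checksOutFork =
      /ref:\s*\$\{\{\s*github\.event\.pull_request\.head/.test(text);
    if (privileged && checksOutFork) {
      console.log(`REVIEW ${f}: privileged trigger runs fork-controlled code`);
    }
  }
}
```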

Runtime memory on build agents is a target. Short-lived OIDC tokens were designed to limit the blast radius of a leak. They still have to live somewhere — in this case, the runner’s process memory. If attacker code can run on the runner, the “short-lived” property doesn’t help, because the attacker uses the token inside its lifetime. The right control is upstream: don’t let attacker code run on the runner in the first place. That’s a posture decision about pull-request workflows, fork permissions, and what triggers a privileged context.
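
To see the “short-lived” property for yourself, decode (don’t verify) a CI OIDC token’s payload and look at the timestamps; the environment variable name here is a placeholder for wherever your runner exposes the token:

```ts
// Sketch: decode a JWT payload to inspect its audience and lifetime.
// The env var name is hypothetical; this does no signature verification.
const token = process.env.CI_OIDC_TOKEN ?? "";
const payloadB64 = token.split(".")[1];
if (!payloadB64) throw new Error("no token in CI_OIDC_TOKEN");
const payload = JSON.parse(
  Buffer.from(payloadB64, "base64url").toString("utf8"),
);
console.log({
  aud: payload.aud,
  iat: new Date(payload.iat * 1000).toISOString(),
  exp: new Date(payload.exp * 1000).toISOString(), // typically minutes away
});
```

An exp a few minutes out is no obstacle to code already running next to the token.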

None of these are new ideas. They’re the same defense-in-depth principles network engineers have applied to ACLs, VLANs, management-plane separation, and firmware integrity for decades. The packages a developer pulls from a public registry are now part of your network’s attack surface — with the same gravity as the firmware on your switches or the rules on your edge firewall. Treat them that way.