DemandFlow Support Centre

API: POST /v1/query. Multi-query streaming

API Reference · Updated 16/04/2026
Run several list queries in parallel and stream the combined NDJSON result. Designed for client code that needs many related datasets in one round trip.

POST /v1/query

Runs multiple list queries in parallel, streaming the combined results as NDJSON. Use this when you need several related datasets in one round trip, for example, loading everything behind a dashboard.

Request body

A raw JSON array of query line objects. Each query line is processed independently, in parallel, and its matching records are written to the shared response stream:

[
  {
    "entity": "PPL",
    "comboKey": "comboKey",
    "query": "SUB:<your-sub-id>",
    "load": "id,name,email"
  },
  {
    "entity": "ACTION",
    "comboKey": "comboKey",
    "query": "SUB:<your-sub-id>",
    "load": "id,name,status"
  }
]

Query line fields

  • entity, uppercase entity code to query.
  • comboKey, name of the key attribute to match on. For standard queries this is literally "comboKey".
  • query, prefix to match on that key (usually starting with SUB:{your-sub-id}).
  • load (optional), comma-separated list of fields to project. If omitted, the whole record is returned.
  • limit (optional), hard cap on items returned for this query line.
  • countOnly (optional), when true, emits a single { _meta: true, _type: "count", entity, count } record instead of the data.
  • filter (optional), case-insensitive text filter { field: "message", term: "timeout" }. Use field: "*" to match anywhere in the object.
  • tsStart, tsEnd (LOG entity only), epoch-ms time range, used with a timestamp-prefixed comboKey.
  • tag (optional), arbitrary value echoed back on _meta records so you can match counts or scan stats back to the query line that produced them.
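As a sketch, query lines combining the optional fields can be built with a small helper. The entity codes, sub id, and the `queryLine` helper name here are illustrative, not part of the API:

```javascript
// Sketch: build query lines for the request body.
// queryLine, the entities, and 'sub-123' are placeholders for illustration.
function queryLine(entity, subId, extra = {}) {
  return { entity, comboKey: 'comboKey', query: `SUB:${subId}`, ...extra };
}

const body = [
  // Count-only line, tagged so the count record can be matched back.
  queryLine('ACTION', 'sub-123', { countOnly: true, tag: 'action-count' }),
  // Filtered line with a hard cap on returned items.
  queryLine('LOG', 'sub-123', {
    filter: { field: '*', term: 'timeout' },
    limit: 100,
  }),
];
```

The resulting array is the raw JSON body to POST to /v1/query.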

Response

Content-Type: application/x-ndjson. One JSON object per line, written as each record is read. Records from different query lines are interleaved; rely on each record's entity field to route it.

Meta records may be emitted alongside the data:

  • { _meta: true, _type: "count", entity, count, tag? }, for countOnly queries.
  • { _meta: true, _type: "scan", entity, scanned, matched, tag? }, when a text filter is active, so you know how much was read vs how much matched.
  • { _type: "error", message, entity, source }, for per-query errors that do not abort the whole stream.
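Since meta, error, and data records share one stream, each parsed row needs to be routed by shape. A minimal dispatcher sketch (the `routeRow` name and handler names are assumptions, not part of the API):

```javascript
// Sketch: route one parsed NDJSON row to the right handler.
// routeRow and the handler names (onError/onMeta/onData) are illustrative.
function routeRow(row, handlers) {
  if (row._type === 'error') return handlers.onError?.(row); // per-query error
  if (row._meta) return handlers.onMeta?.(row);              // count / scan record
  return handlers.onData?.(row.entity, row);                 // ordinary data record
}
```

Call this once per line parsed from the response stream.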

Example, parallel load

curl -X POST "https://rest.demandflow.com/v1/query" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '[
    {"entity":"PPL","comboKey":"comboKey","query":"SUB","load":"id,name"},
    {"entity":"ACTION","comboKey":"comboKey","query":"SUB","load":"id,name,status"}
  ]'

Example, case-insensitive log search over a time window

[
  {
    "entity": "LOG",
    "comboKey": "comboKey",
    "query": "SUB:<sub>|LOGSOURCE:<source>|TS:",
    "tsStart": 1776000000000,
    "tsEnd":   1776600000000,
    "filter":  { "field": "*", "term": "timeout" },
    "limit":   500,
    "tag":     "timeouts-last-week"
  }
]
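tsStart and tsEnd are plain epoch milliseconds, so they can be derived from the current time. A sketch for a "last seven days" window (the `timeWindow` helper is a name introduced here for illustration):

```javascript
// Sketch: derive an epoch-ms window covering the last `days` days.
// timeWindow is an illustrative helper, not part of the API.
const DAY_MS = 24 * 60 * 60 * 1000;

function timeWindow(days) {
  const tsEnd = Date.now();
  return { tsStart: tsEnd - days * DAY_MS, tsEnd };
}

const { tsStart, tsEnd } = timeWindow(7);
```

Drop the resulting tsStart/tsEnd into the LOG query line above.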

Consuming the stream

const r = await fetch('https://rest.demandflow.com/v1/query', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify(queries),
});
if (!r.ok) throw new Error(`query failed: ${r.status}`);
const reader = r.body.getReader();
const decoder = new TextDecoder();
let buf = '';
while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    // Split on newlines; keep any trailing partial line in the buffer.
    let i;
    while ((i = buf.indexOf('\n')) !== -1) {
        const line = buf.slice(0, i); buf = buf.slice(i + 1);
        if (!line) continue;
        const row = JSON.parse(line);
        // dispatch by row.entity or row._type
    }
}
// Handle a final record that arrives without a trailing newline.
if (buf.trim()) {
    const row = JSON.parse(buf);
    // dispatch as above
}
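The same parsing can be packaged as a reusable async generator, so callers iterate rows with for await. This is a sketch under the assumption that the input is any async iterable of string chunks; the `parseNdjson` name is introduced here:

```javascript
// Sketch: turn an async iterable of text chunks into parsed NDJSON rows.
// parseNdjson is an illustrative helper; it works with any string-chunk source.
async function* parseNdjson(chunks) {
  let buf = '';
  for await (const chunk of chunks) {
    buf += chunk;
    let i;
    while ((i = buf.indexOf('\n')) !== -1) {
      const line = buf.slice(0, i);
      buf = buf.slice(i + 1);
      if (line) yield JSON.parse(line);
    }
  }
  if (buf.trim()) yield JSON.parse(buf); // final row without a trailing newline
}
```

A generator keeps the buffering logic in one place and lets the caller focus on routing each row.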

