Setup AI
Spectre Scan integrates with two AI providers — pick whichever account you have credit on:
- OpenAI (openai.com) — the openai plugin / provider.
- Anthropic Claude (anthropic.com) — the claude plugin / provider.
Both expose the same surface: per-issue analysis (description, remediation, exploit, dissect, insights, patch, report) at scan time, and the Djin! side-dock chat assistant in the Pro web UI.
CLI
OpenAI:
bin/spectre --plugin=openai:apikey=YOUR_OPENAI_KEY [URL]
Claude:
bin/spectre --plugin=claude:apikey=YOUR_ANTHROPIC_KEY [URL]
Optional plugin parameters (both providers accept them):
- model — override the default model (gpt-4o for OpenAI, claude-opus-4-5 for Claude).
- min_severity — restrict per-issue analysis to issues at or above this severity (informational, low, medium, high). Default: medium. Lower-severity findings are skipped, saving API tokens.
Example with overrides:
bin/spectre \
--plugin=claude:apikey=YOUR_KEY,model=claude-haiku-4-5,min_severity=high \
[URL]
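The --plugin argument is just a provider name followed by comma-separated key=value pairs, so it can be composed programmatically when scripting scans. A minimal sketch of that composition (the plugin_flag helper is hypothetical, not part of Spectre):

```python
def plugin_flag(provider: str, apikey: str, **params: str) -> str:
    """Compose a --plugin=provider:apikey=...,k=v argument in the
    format shown above. Extra keyword args become plugin parameters."""
    parts = [f"apikey={apikey}"] + [f"{k}={v}" for k, v in params.items()]
    return f"--plugin={provider}:" + ",".join(parts)

# Reproduces the Claude override example above:
print(plugin_flag("claude", "YOUR_KEY",
                  model="claude-haiku-4-5", min_severity="high"))
# → --plugin=claude:apikey=YOUR_KEY,model=claude-haiku-4-5,min_severity=high
```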
REST API
Drive the same per-issue plugin from POST /instances by adding the provider to the plugins hash on the options body. It accepts the same apikey / model / min_severity parameters as the CLI.
OpenAI:
{
"url": "http://example.com/",
"checks": ["*"],
"plugins": {
"openai": { "apikey": "YOUR_OPENAI_KEY" }
}
}
Claude:
{
"url": "http://example.com/",
"checks": ["*"],
"plugins": {
"claude": { "apikey": "YOUR_ANTHROPIC_KEY" }
}
}
Plugin parameters layer in as siblings of apikey:
{
"plugins": {
"claude": {
"apikey": "YOUR_KEY",
"model": "claude-haiku-4-5",
"min_severity": "high"
}
}
}
Only one of openai / claude should be present per scan — they
both annotate the same issues, so loading both wastes API tokens
on duplicate work.
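Since the options body is plain JSON, it is easy to assemble and sanity-check before posting. A sketch assuming only the shape documented above — build_scan_options is an illustrative helper, and the one-provider guard enforces the note above client-side rather than mirroring any engine behavior:

```python
import json

def build_scan_options(provider: str, apikey: str, *, model: str = None,
                       min_severity: str = None, url: str = "http://example.com/"):
    """Assemble a POST /instances body with one AI provider plugin.
    The shape matches the examples above; the helper itself is illustrative."""
    if provider not in ("openai", "claude"):
        raise ValueError("provider must be 'openai' or 'claude' — one per scan")
    opts = {"apikey": apikey}
    if model is not None:
        opts["model"] = model
    if min_severity is not None:
        opts["min_severity"] = min_severity
    return {"url": url, "checks": ["*"], "plugins": {provider: opts}}

body = build_scan_options("claude", "YOUR_KEY",
                          model="claude-haiku-4-5", min_severity="high")
print(json.dumps(body, indent=2))
```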
MCP
The MCP server forwards its spawn_instance.options straight to
the engine, so the plugins shape is identical to REST — just
nested under the tools/call envelope.
OpenAI:
{
"jsonrpc": "2.0", "id": 1, "method": "tools/call",
"params": {
"name": "spawn_instance",
"arguments": {
"options": {
"url": "http://example.com/",
"checks": ["*"],
"plugins": {
"openai": { "apikey": "YOUR_OPENAI_KEY" }
}
}
}
}
}
Claude:
{
"jsonrpc": "2.0", "id": 1, "method": "tools/call",
"params": {
"name": "spawn_instance",
"arguments": {
"options": {
"url": "http://example.com/",
"checks": ["*"],
"plugins": {
"claude": { "apikey": "YOUR_ANTHROPIC_KEY" }
}
}
}
}
}
model and min_severity slot in alongside apikey exactly as
they do over REST.
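Because the options hash is identical, an MCP client can reuse the same body and only add the JSON-RPC envelope. A sketch of that wrapping — the helper name and id handling are our own; the envelope shape is the one shown above:

```python
import itertools

_next_id = itertools.count(1)

def spawn_instance_call(options: dict) -> dict:
    """Wrap engine options in the MCP tools/call envelope shown above.
    Transport (stdio, HTTP, ...) is whatever your MCP client provides."""
    return {
        "jsonrpc": "2.0",
        "id": next(_next_id),
        "method": "tools/call",
        "params": {"name": "spawn_instance", "arguments": {"options": options}},
    }

call = spawn_instance_call({
    "url": "http://example.com/",
    "checks": ["*"],
    "plugins": {"openai": {"apikey": "YOUR_OPENAI_KEY"}},
})
```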
The plugins key is documented in full at
spectre://options/reference
(or inlined under the Options reference
on the MCP page).
Web UI
In Settings → Djin! AI assistant:
- Pick a provider (OpenAI or Claude) from the dropdown.
- Paste your API key. Spectre Pro pings the provider on save to verify the key — invalid keys / billing issues / no model access surface as a precise error rather than a silent failure.
- Save.
That alone enables Djin! — the in-app side-dock AI assistant. Two extra toggles in the same section control the rest:
- Auto-analyze issues during scan — off by default. When on, the scanner attaches the same provider’s per-issue plugin to every scan, expanding each finding’s description / remediation / exploit fields automatically. Costs roughly one provider call per issue.
- Djin! daily token budget — per-user 24-hour cap on prompt + completion tokens summed across every Djin! conversation. 0 disables the cap.
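The budget semantics above reduce to a simple check: prompt and completion tokens are summed across conversations, and a cap of 0 means unlimited. An illustrative sketch of that rule (not Spectre Pro's actual accounting code):

```python
def within_budget(prompt_tokens: int, completion_tokens: int, daily_cap: int) -> bool:
    """True if the user's summed 24-hour usage still fits the cap.
    A cap of 0 disables the limit entirely, as documented above."""
    if daily_cap == 0:
        return True
    return prompt_tokens + completion_tokens <= daily_cap
```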
Apex Recon ships with the same Djin! chat assistant, but no per-issue plugin (Apex’s domain is input-vector discovery, not security findings — there’s nothing per-record for the AI to auto-expand). Configuring the AI key in Apex Settings only enables Djin!.