Optimize scans
Left to its own devices, Apex Recon will try to optimize itself to match any given circumstance, but there are limitations to what it can do automatically.
If a scan is taking too long, chances are you can make it go much faster by taking a couple of minutes to configure the system to more closely match your needs.
In addition to performance, the following options also affect resource usage so you can experiment with them to better match your available resources as well.
- Ensure server responsiveness
- Balance RAM consumption and performance
- Reduce RAM consumption by avoiding large resources
- Don’t follow redundant pages
- Adjust the number of browser workers
- Narrow recon to specific sink kinds
Ensure server responsiveness
By default, Apex Recon will monitor the response times of the server and throttle itself down if it detects that the server is getting stressed. This happens in order to keep the server alive and responsive and maintain a stable connection to it.
However, weak servers sometimes die before Apex Recon gets a chance to adjust itself.
You can bring up the scan statistics on the CLI screen by hitting Enter, in which case you’ll see something like:
[~] Currently auditing http://testhtml5.vulnweb.com/ajax/popular?offset=0
[~] Burst response time sum 51.011 seconds
[~] Burst response count 29
[~] Burst average response time 1.759 seconds
[~] Burst average 0 requests/second
[~] Original max concurrency 10
[~] Throttled max concurrency 2
We can see that the server is having a hard time from the following values:
- Burst average response time: 1.759 seconds
- Burst average: 0 requests/second
- Throttled max concurrency: 2
The response times were so high (1.759 seconds) that Apex Recon had to throttle its HTTP request concurrency from 10 down to 2, which would drastically increase the scan time.
You can lower the default HTTP concurrency and try again to make sure that the server never comes under a stressful load:
--http-request-concurrency=5
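The throttling behaviour described above can be sketched as a simple feedback loop. This is a hypothetical simplification; the function name, threshold and halving strategy are illustrative assumptions, not Apex Recon internals:

```python
# Hypothetical sketch of response-time-based throttling; the names,
# threshold and halving strategy are assumptions for illustration.
def throttle_concurrency(max_concurrency, burst_response_times,
                         slow_threshold=1.0, floor=2):
    """Halve the allowed concurrency when the burst average response
    time exceeds the threshold, never dropping below the floor."""
    if not burst_response_times:
        return max_concurrency
    avg = sum(burst_response_times) / len(burst_response_times)
    if avg > slow_threshold:
        return max(max_concurrency // 2, floor)
    return max_concurrency

concurrency = 10               # original max concurrency
slow_burst = [1.759] * 29      # a slow burst, as in the statistics above
concurrency = throttle_concurrency(concurrency, slow_burst)  # 10 -> 5
concurrency = throttle_concurrency(concurrency, slow_burst)  # 5 -> 2
print(concurrency)  # 2, matching the throttled value shown earlier
```

A real implementation could also ramp concurrency back up once response times recover; the sketch only covers the throttling direction.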
Balance RAM consumption and performance
Most excessive RAM consumption issues are caused by large (or a lot of) HTTP requests, which need to be temporarily stored in memory in order for them to later be scheduled in a way that achieves optimal network concurrency.
In short, having a lot of HTTP requests in the queue allows Apex Recon to perform many of them at the same time, and thus make better use of your available bandwidth. So, a large queue means better network performance.
However, a large queue can lead to some serious RAM consumption, depending on the website and type of audit and a lot of other factors.
As a compromise between preventing RAM consumption issues and still getting decent performance, the default queue size is set to 50.
You can adjust this number to better suit the situation via the --http-request-queue-size option.
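The trade-off between queue size, RAM and concurrency can be illustrated with a bounded queue. This is a hedged sketch (the variable names are assumptions, not Apex Recon's code); only the default of 50 comes from the text above:

```python
import queue

# Sketch: a bounded queue of pending HTTP requests. A larger maxsize
# lets more requests be batched for concurrent dispatch, at the cost
# of holding more of them in RAM at once.
pending = queue.Queue(maxsize=50)  # mirrors the default queue size of 50

for i in range(50):
    pending.put_nowait(f"GET /page/{i}")

try:
    pending.put_nowait("GET /page/50")  # queue is full at this point
except queue.Full:
    print("queue full: producer waits until workers drain requests")
```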
Reduce RAM consumption by avoiding large resources
Apex Recon performs a large number of analysis operations on each web page. This is usually not a problem, except for when dealing with web pages of large sizes.
If you are in a RAM-constrained environment, you can configure Apex Recon to not download and analyze pages which exceed a certain size limit; by default, that limit is 500KB.
You can adjust the maximum allowed size of HTTP responses via the --http-response-max-size option.
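A size check like the one below illustrates the idea. The helper name and the handling of missing Content-Length headers are assumptions for the sketch; only the 500KB default comes from the option above:

```python
from typing import Optional

MAX_RESPONSE_SIZE = 500 * 1024  # mirrors the documented 500KB default

def should_download(content_length: Optional[int],
                    limit: int = MAX_RESPONSE_SIZE) -> bool:
    """Skip responses whose advertised size exceeds the limit.

    A response without a Content-Length header cannot be rejected up
    front, so it is accepted here; a real client would stream it and
    abort once the limit is reached.
    """
    return content_length is None or content_length <= limit

print(should_download(200 * 1024))   # True: a 200KB page fits
print(should_download(2 * 1024**2))  # False: a 2MB page is skipped
```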
Don’t follow redundant pages
A lot of websites have redundant pages like galleries, calendars and directory listings, which are basically the same page with the same inputs, just presenting different data.
Auditing the first (or first few) of such pages is often enough, and trying to follow and audit them all can sometimes result in an infinite crawl, as can be the case with calendars.
Apex Recon provides two features to help deal with that:
- Redundancy filters: Specify pattern and counter pairs; pages matching the pattern will be followed the number of times specified by the counter. (--scope-redundant-path-pattern)
- Auto-redundant: Follow URLs with the same combinations of query parameters a limited number of times. (--scope-auto-redundant, default is 10.)
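The redundancy-filter behaviour can be sketched as a pattern-to-counter map. The class and method names are hypothetical; only the pattern/counter semantics come from the description above:

```python
import re

# Hypothetical sketch of a redundancy filter: each pattern carries a
# counter, and matching URLs are only followed while it lasts.
class RedundancyFilter:
    def __init__(self, rules):
        # rules: mapping of regex pattern -> times to follow matches
        self.counters = {re.compile(p): n for p, n in rules.items()}

    def should_follow(self, url):
        for pattern, remaining in self.counters.items():
            if pattern.search(url):
                if remaining <= 0:
                    return False  # counter exhausted: stop following
                self.counters[pattern] = remaining - 1
        return True  # no pattern matched, or counters still available

# Follow calendar pages at most twice, avoiding an infinite crawl.
scope = RedundancyFilter({r"/calendar/": 2})
print([scope.should_follow(f"/calendar/2024-{m:02d}") for m in range(1, 5)])
# [True, True, False, False]
```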
Adjust the number of browser workers
Apex Recon uses real browsers to support technologies such as HTML5, AJAX and DOM manipulation and perform deep analysis of client-side code.
Even though browser operations are performed in parallel using a pool of workers, the default pool size is modest and operations can be time consuming.
By increasing the number of workers in the pool, scan durations can be dramatically shortened, especially when scanning web applications that make heavy use of client-side technologies.
Finding the optimal pool size depends on the resources of your machine (especially the number of CPU cores) and will probably require some experimentation; on average, 1-2 browsers for each logical CPU core serves as a good starting point.
However, do keep in mind that more workers may lead to higher RAM consumption as they will also accelerate workload generation.
You can set this option via --dom-pool-size.
The default is calculated from the number of available CPU cores on your system.
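A plausible way to derive such a default, following the "1-2 browsers per logical core" guideline above, is sketched below. The helper name and the cap are illustrative assumptions, not Apex Recon's actual formula:

```python
import os

# Hypothetical helper scaling the browser pool by logical core count;
# the cap is an illustrative safeguard against RAM exhaustion.
def default_pool_size(per_core=1, cap=16):
    cores = os.cpu_count() or 1  # cpu_count() may return None
    return min(per_core * cores, cap)

print(default_pool_size())  # e.g. 8 on an 8-core machine
```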
Narrow recon to specific sink kinds
If you already know which class of finding you care about, restrict sink-tracing to that subset. Hits at filtered-out sinks are skipped at trace time, so they don’t pay the trace cost and they don’t bloat the report.
The five sink kinds are:
- active: input reaches an exec-context sink (eval / Runtime.exec / SQL execute / innerHTML / DOM event handler). Highest leverage.
- body: value reflected verbatim into the response body (XSS / HTML-injection-shaped surfaces).
- header_name: value reflected into a response header name.
- header_value: value reflected into a response header value.
- blind: input reaches a sink with no observable response signal (timing / out-of-band only).
The CLI defaults to the first four and skips blind because blind
hits are noise on most chatty targets. Override with
--sink-filter NAME (repeatable):
# Only exec-context findings — fastest narrowing for triage runs
bin/apex https://example.com/ --sink-filter active
# Add `blind` back to the default set
bin/apex https://example.com/ \
--sink-filter active --sink-filter body \
--sink-filter header_name --sink-filter header_value \
--sink-filter blind
The first explicit --sink-filter flag clears the default;
subsequent flags accumulate. From an MCP client, pass the same
list as options.sinks_filter (omit the key to inherit the
four-kind default; pass [] for a crawl-only run that produces
just a sitemap). The web profile UI exposes the same toggles per
profile.
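How such a filter narrows the result set can be sketched as a set-membership check. The data shapes and function name are assumptions; the five kinds and the four-kind default come from the text above:

```python
# The five sink kinds; the CLI defaults to all but "blind".
DEFAULT_SINKS = {"active", "body", "header_name", "header_value"}

def filter_hits(hits, sinks=None):
    """Keep only hits whose sink kind is in the allowed set.

    sinks=None inherits the four-kind default; an empty set keeps
    nothing, analogous to a crawl-only run.
    """
    allowed = DEFAULT_SINKS if sinks is None else sinks
    return [h for h in hits if h["sink"] in allowed]

hits = [{"url": "/a", "sink": "active"},
        {"url": "/b", "sink": "blind"},
        {"url": "/c", "sink": "body"}]
print(len(filter_hits(hits)))              # 2: blind skipped by default
print(len(filter_hits(hits, {"active"})))  # 1: triage-style narrowing
print(len(filter_hits(hits, set())))       # 0: crawl-only
```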