load testing · for real flows

Hit the real flow.
Not synthetic endpoints.

Record a real user journey once. BatteringRam parameterises the dynamic bits — IDs, tokens, CSRF nonces — and replays the whole sequence at high concurrency from a fleet of Rust runners.

No credit card. Open the trial, capture some traffic, fire a run.
run · checkout-flow
LIVE
rps 6,431
p50 6 ms
p95 42 ms
p99 61 ms
errors 0.02%
4 runners · concurrency 50 · 1m 20s elapsed

Most load testers hit one URL.
We replay the whole sequence.

The bugs that show up under real load aren't "can our /search endpoint do 10 000 RPS". They're "the signup → verify → login → create-order → pay sequence breaks at concurrency 80 because of a session-lookup race".

BatteringRam captures the sequence once — via a SOCKS proxy or our Chrome extension — auto-detects values that flow from one response into later requests, and replays the whole thing from a fleet of Rust runners that can sustain thousands of RPS per machine.
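The auto-detection step above works roughly like diffing: a value that appears in one response and then verbatim in a later request is probably dynamic state. BatteringRam's actual detector isn't public; this is a minimal sketch of the idea, with a made-up two-step capture (the field names and shapes are illustrative only):

```python
import json

# Hypothetical two-step capture: the token returned by step 1
# reappears verbatim in step 2's Authorization header.
capture = [
    {"request": {"path": "/login", "body": '{"user":"a@example.com"}'},
     "response": {"body": '{"token":"tok_8f3a"}'}},
    {"request": {"path": "/orders",
                 "headers": {"Authorization": "Bearer tok_8f3a"}},
     "response": {"body": '{"id": 1}'}},
]

def detect_links(capture):
    """Find values that flow from one response into a later request."""
    links = []
    for i, step in enumerate(capture):
        values = json.loads(step["response"]["body"])
        for key, val in values.items():
            needle = str(val)
            for j, later in enumerate(capture[i + 1:], start=i + 1):
                if needle in json.dumps(later["request"]):
                    links.append({"from_step": i, "field": key,
                                  "to_step": j})
    return links

print(detect_links(capture))
# → [{'from_step': 0, 'field': 'token', 'to_step': 1}]
```

At replay time, each detected link is re-resolved per virtual user, so every VU threads its own token through the sequence instead of replaying a stale one.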

ai inside

An engineer in the loop —
without the cost of one.

BatteringRam ships with three AI surfaces. None of them are chat. They look at the artifacts you already have — captures, paths, run results — and act.

workflows
Realistic flow generation
Point it at your OpenAPI spec or a handful of captured requests. It composes plausible end-to-end user journeys — signup → verify → first action → upgrade — with parameter variation by user archetype.
optimisation
Run-result advice
After every benchmark, an LLM reads the histogram + per-path stats and writes a short, opinionated report: where the bottleneck is, what to try next, and which paths drifted versus your last good run.
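The report itself is LLM-written, but the drift check underneath is plain arithmetic: compare each percentile against the baseline and flag anything that moved past a threshold. A sketch of that comparison (the 1.25× threshold is an illustrative choice, not the shipped default):

```python
def percentile(sorted_ms, p):
    """Nearest-rank percentile over sorted latency samples (ms)."""
    idx = min(len(sorted_ms) - 1, int(p / 100 * len(sorted_ms)))
    return sorted_ms[idx]

def drift_report(current_ms, baseline_ms, threshold=1.25):
    """Flag percentiles that regressed versus the last good run."""
    cur, base = sorted(current_ms), sorted(baseline_ms)
    report = {}
    for p in (50, 95, 99):
        c, b = percentile(cur, p), percentile(base, p)
        report[f"p{p}"] = {"current": c, "baseline": b,
                           "regressed": c > b * threshold}
    return report

# Latencies that doubled since the baseline trip every percentile.
print(drift_report([2 * x for x in range(1, 101)], list(range(1, 101))))
```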
compliance
PII & secrets, found and scrubbed
Captures are scanned automatically for emails, phone numbers, government IDs, payment card patterns, JWTs, bearer tokens, and API keys. Each match is flagged with the right scrubber pre-selected — randomise, redact, regenerate per VU.
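The scan is pattern-based. A stripped-down sketch of the approach — these three regexes are illustrative only; the shipped scanner covers far more formats and validates matches before flagging them:

```python
import re

# Illustrative patterns only -- real scanners need many more, plus
# validation (e.g. Luhn checks for card numbers) to cut false positives.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "jwt":   re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text):
    """Return (kind, match) pairs for every suspected secret in text."""
    hits = []
    for kind, rx in PATTERNS.items():
        for m in rx.finditer(text):
            hits.append((kind, m.group()))
    return hits

body = "user=jo@example.com&auth=eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.sig"
print(scan(body))
```

Each hit then maps to a scrubber: randomise swaps in a fresh fake value per VU, redact blanks it, regenerate mints a new credential at replay time.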
compliance
Audit-ready data trails
Every redaction, replay, and report is logged with who, when, and which rule fired — exportable for SOC 2 / ISO evidence. AI suggestions are diffable; nothing changes your captures without your say-so.
Your data does not train any model. BatteringRam uses provider APIs with retention disabled and stores no prompts/completions beyond the run that needed them. You can self-host the AI worker on your own infra; Enterprise plans run the LLM inside your VPC by default.
how it works

Record. Edit. Run. Compare.

step 01
Record
Send traffic through our SOCKS5 proxy — or install the Chrome extension and click Start capture. Every request lands in your project.
step 02
Edit
Select the contiguous requests that form a path. Confirm auto-detected dynamic links. Mark sensitive fields to randomise.
step 03
Run
Pick paths, concurrency, max time or request budget. Watch live percentiles stream in over Server-Sent Events.
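The live stream is standard SSE framing: `data:` lines, blank line between events. The exact payload shape BatteringRam emits is an assumption here; this minimal parser just shows how little it takes to consume the stream from your own tooling:

```python
import json

def parse_sse(lines):
    """Minimal Server-Sent Events parser: yield each event's JSON payload."""
    buf = []
    for line in lines:
        if line.startswith("data:"):
            buf.append(line[5:].strip())
        elif line == "" and buf:        # blank line terminates an event
            yield json.loads("\n".join(buf))
            buf = []

# Assumed payload shape -- one tick of the live-percentile stream.
stream = [
    'data: {"rps": 6431, "p50": 6, "p95": 42, "p99": 61}',
    "",
]
for event in parse_sse(stream):
    print(event["p95"])   # prints 42
```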
step 04
Compare
The run report flags regressions against your last good baseline. Export HTML, Markdown, or PDF. Trigger from CI via the public API.
capabilities

Built for sequences, not one-shots.

Capture, not script
Record traffic through the SOCKS proxy or the Chrome extension. No hand-written test scripts.
Dynamic values, handled
IDs and tokens that flow response → next-request are auto-detected and substituted per virtual user.
Rust runner
Pull-based multi-VU agent. Sustains thousands of RPS per machine with sub-millisecond p50 on commodity hardware.
Live percentiles
RPS and p50/p95/p99 stream over SSE. Export the run as HTML, Markdown, or PDF.
Auto-scaled fleets
Optional AWS auto-provisioning: spin up EC2 runners on demand, stop them the moment runs drain. No idle bill.
Scrub before you replay
Mark fields to randomise (emails, phones, UUIDs) or regex-replace. Keep real production data out of replays.
pricing

Start free. Upgrade when you outgrow it.

14 days on the trial. After that, pick the plan that matches how many projects you're benchmarking.

Trial
See if the shape fits.
Free · 14 days
  • 1 project
  • Capture via SOCKS proxy or Chrome extension
  • Concurrency capped at 5 / runner
  • Run duration capped at 60s
  • 3 runs per 24h
  • 14 days — then read-only until you upgrade
most chosen
Small Business
One team, full firepower.
$153 / year
equivalent to $12.75/mo, billed annually
  • 3 projects
  • Full benchmarking — unlimited concurrency, duration, runs
  • Live percentile charts + HTML / Markdown / PDF reports
  • Email support
Agency
Run benchmarks for many clients.
$306 / year
equivalent to $25.50/mo, billed annually
  • 25 projects included
  • +$1/mo per additional project (annualised when billed yearly)
  • Everything in Small Business
  • Priority support
Enterprise
Dedicated AWS, custom everything.
Talk to us
  • Dedicated AWS auto-pool — geo-distributed runners
  • Unlimited projects
  • Custom integrations, SSO
  • SLA + named support

FAQ

Why not just write a k6 script?
Because production failures usually involve a sequence of requests with shared state, and the dynamic IDs that thread through them. BatteringRam captures the sequence and parameterises the IDs for you. The whole point is that you don't write the script.
Do you decrypt HTTPS?
The SOCKS proxy uses a generated CA you install locally — same pattern as Charles Proxy or mitmproxy. The Chrome extension uses the DevTools Protocol so it reads bodies after TLS, no CA install needed.
Where do the runners run?
On servers you bring (SSH-bootstrapped from the admin UI) or on AWS EC2 we auto-provision and shut down for you when no runs are queued. Enterprise gets dedicated AWS capacity.
Does the AI see my production data?
Only the chunks needed for the specific suggestion it's making, and only after the secret-scanner has scrubbed the obvious things (tokens, PII, payment patterns). Provider retention is disabled. Self-host the AI worker for full isolation; Enterprise runs it inside your VPC.
Can I run from CI?
Yes. Everything in the UI is exposed under /api/v1 using API keys. Create a project, build paths, kick off runs, fetch reports — all from a shell script.
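A hedged sketch of what a CI step might look like. The base URL, the `/runs` endpoint name, and the payload fields are assumptions for illustration — check the API reference for the real routes:

```python
import json
import urllib.request

API = "https://batteringram.example/api/v1"   # illustrative base URL

def build_request(key, method, path, payload=None):
    """Build an authenticated /api/v1 request. Endpoint paths are
    illustrative, not confirmed routes."""
    return urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode() if payload is not None else None,
        headers={"Authorization": f"Bearer {key}",
                 "Content-Type": "application/json"},
        method=method,
    )

# In CI: kick off a run against a saved path, then fetch the report.
req = build_request("brm_XXXX", "POST", "/runs",
                    {"path_id": "checkout-flow", "concurrency": 50})
# with urllib.request.urlopen(req) as resp:
#     run = json.load(resp)
```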

Stop guessing.
Start hammering.

Record one flow today. See how it behaves at concurrency 50. The whole trial fits in 14 days.