Most people still imagine financial fraud as something that requires malware, 0-days, or elite exploits. Sometimes the most damaging scenarios don't come from vulnerabilities — they come from abusing what the system already trusts.
No CVEs. No alerts triggered. Full impact — end-to-end banking fraud validated in a real engagement.
This case comes from a real mission with a very explicit goal: "Validate whether an end-to-end banking fraud is possible."
Not "find vulnerabilities". Not "produce a pentest report". Validate whether money can move without authorisation and without triggering a single alert. We acted as a threat actor, not as an auditor from a "cyber" company.
The Setup
The client was a financial institution with a mature security posture on paper: EDR deployed, SOC with 24/7 coverage, MFA enforced, quarterly pentests. They had done everything right according to the checklist.
The engagement was scoped as a threat-led simulation — we were given a specific threat actor profile combining internal fraud vectors with external access, and a clear success criterion: initiate a transaction that would not be detected or reversed within the engagement window.
No social engineering of staff. No phishing. We started from a position of partial insider access — a realistic assumption given that most large financial fraud cases involve some degree of insider knowledge or compromise.
0 CVEs used
0 alerts triggered
4h17 from initial access to cleared transaction
100% objective achieved
The Attack Chain
Phase 01: Reconnaissance without touching the target
Before touching a single system, we spent two weeks in open-source intelligence. The goal was to map the banking platform stack (vendor, version, integrations), the transaction approval workflow — who approves what, at what threshold — and the monitoring architecture (what gets logged, what generates alerts).
This phase generated more value than the exploitation itself. Understanding the trust model of the banking system — what it considers a "normal" transaction — is the foundation of everything that follows.
Phase 02: Establishing a foothold via trusted process abuse
The initial access vector was not a vulnerability. It was a legitimate process: a batch import feature used by the operations team to process high-volume transactions. The format was documented internally; the credentials were obtained through credential stuffing against a partner portal that reused authentication tokens.
No exploit. No CVE. A legitimate feature, abused by someone who understood how it worked.
The batch import accepted a specific XML format. Transactions submitted via this channel were processed with reduced scrutiny — they were considered pre-validated by the upstream operations workflow.
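To make the trust decision concrete, here is a minimal sketch of how a "pre-validated" batch channel can short-circuit review. The schema, field names, and the `pre_validated` flag are all assumptions for illustration; the client's actual format is confidential and different.

```python
import xml.etree.ElementTree as ET

# Hypothetical batch format; every element and attribute here is an
# assumption for illustration, not the client's real schema.
batch_xml = """
<batch source="ops-upload" pre_validated="true">
  <transaction>
    <amount currency="EUR">9400.00</amount>
    <beneficiary iban="DE00000000000000000000"/>
    <reference>INV-2025-04417</reference>
  </transaction>
</batch>
"""

root = ET.fromstring(batch_xml)

# The entire trust decision hinges on one attribute the uploader controls:
# anything marked pre_validated skips the full review pipeline.
skip_full_review = root.get("pre_validated") == "true"
print(skip_full_review)  # → True
```

The point of the sketch: the reduced-scrutiny path is selected by data the submitter supplies, so whoever holds the upload credentials controls which pipeline their transactions enter.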
Phase 03: Transaction injection
Using the batch import channel, we submitted a transaction set that met the following conditions:
Amount below the automated review threshold
Beneficiary account matching a pattern consistent with regular supplier payments
Timestamp within normal business hours
Reference field populated with a realistic invoice number format
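The four conditions above can be read as the inverse of a rule set: a transaction that satisfies all of them fails every automated check. The sketch below mirrors that logic; the threshold, IBAN pattern, reference format, and business-hours window are invented placeholders, not the institution's real rules.

```python
import re
from datetime import datetime

REVIEW_THRESHOLD = 10_000                      # assumed automated-review cutoff
SUPPLIER_IBAN = re.compile(r"^DE\d{20}$")      # assumed supplier-payment pattern
INVOICE_REF = re.compile(r"^INV-\d{4}-\d{5}$") # assumed invoice reference format

def triggers_review(amount, iban, reference, ts):
    """Illustrative rule set mirroring the four conditions above."""
    if amount >= REVIEW_THRESHOLD:
        return True                            # over threshold: manual review
    if not SUPPLIER_IBAN.match(iban):
        return True                            # beneficiary looks unusual
    if not INVOICE_REF.match(reference):
        return True                            # reference field looks wrong
    if not 9 <= ts.hour < 17:
        return True                            # outside business hours
    return False

# A crafted transaction meeting all four conditions passes silently:
tx = dict(amount=9_400, iban="DE" + "0" * 20,
          reference="INV-2025-04417",
          ts=datetime(2025, 11, 3, 10, 42))
print(triggers_review(**tx))  # → False
```

Each rule is individually sensible; the gap is that an attacker who knows the rules can satisfy all of them at once.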
The transaction cleared. No alert was triggered. The SOC did not flag it. From initial access to cleared transaction: 4 hours and 17 minutes.
What the SOC Missed — and Why
Post-engagement debrief with the blue team revealed three systemic gaps:
01. The batch import channel was invisible to transaction monitoring
It was treated as an internal operational process rather than an external-facing attack surface. The assumption was that only internal staff with approved credentials could use it — which was correct, until those credentials were compromised.
02. Anomaly detection was tuned for volume, not pattern
The monitoring rules looked for unusual transaction volumes or amounts. A single transaction that was correctly formatted, below threshold, and from a known source was invisible. Behavioural baselining was absent for this channel.
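A minimal sketch of why volume-only rules miss this attack. The counters and ceilings below are assumptions for illustration, not the SOC's actual thresholds:

```python
from collections import defaultdict

# Illustrative volume-only rule: alert when a source exceeds an hourly
# count or total-amount ceiling. Limits are invented for the example.
MAX_TX_PER_HOUR = 500
MAX_AMOUNT_PER_HOUR = 2_000_000

def volume_alerts(events):
    """Return sources whose aggregate hourly traffic breaches a ceiling."""
    totals = defaultdict(lambda: [0, 0.0])     # source -> [count, amount]
    for source, amount in events:
        totals[source][0] += 1
        totals[source][1] += amount
    return [src for src, (n, amt) in totals.items()
            if n > MAX_TX_PER_HOUR or amt > MAX_AMOUNT_PER_HOUR]

# A normal hour of batch traffic plus one injected transaction:
hour = [("batch-import", 4_000.0)] * 300 + [("batch-import", 9_400.0)]
print(volume_alerts(hour))  # → [] : the injected transaction is invisible
```

The rule only sees aggregates, so a single well-formed transaction changes nothing it measures; detecting it would require a per-channel behavioural baseline rather than a ceiling.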
03. The beneficiary validation check had a timing gap
Real-time beneficiary screening ran asynchronously. The transaction cleared before the screening result was returned. In normal operations, this gap is irrelevant. In an adversarial context, it is the entire attack surface.
Takeaways for Financial Institutions
This engagement is not unusual. Variations of this chain appear in every financial institution we assess. The specific details change — the vector, the platform, the channel — but the underlying structure is consistent.
Trust model
Trust the process, not just the credential
Legitimate channels with valid credentials are not safe by definition. Every internal process that touches financial data is a potential attack surface.
Control timing
Async controls are detection, not prevention
Any validation that runs after a transaction clears cannot prevent fraud. Understand which controls are preventive and which are detective — and close the gaps.
Assessment model
Simulation beats assessment
A traditional pentest would have found the credential exposure. Only a threat-led simulation finds the transaction injection path — because that requires understanding business logic, not just technology.
The question is not "do we have MFA?" The question is "can an attacker move money without triggering an alert?" If you don't know the answer, you need someone to find out before an attacker does.
This engagement was conducted under a formal contract with full authorisation. Details have been modified to protect client confidentiality. Originally published on LinkedIn — November 25, 2025.
Understand your real exposure before an attacker does.
Finance-specific threat-led engagements. We validate what an attacker with a specific objective could achieve against your institution — not whether your controls exist, but whether they work.