# AI Adoption in Finance
Summary: The gap between AI adoption at tech-first firms and traditional financial institutions is large and structural, driven by culture, regulation, and data infrastructure — not talent or resources.
Sources: raw/articles/simon-taylor-2026-04-26.md, raw/call-notes/carlos-2026-05-10.md, raw/call-notes/shrikant-2026-05-11.md, raw/call-notes/zain-2026-05-14.md, raw/call-notes/jie-2026-05-16.md
Last updated: 2026-05-17
## Current Reality
- Most large financial institutions still rely on the consumer ChatGPT.com interface for internal use (source: zain-call)
- ~1% of employees at big banks actively use AI tools (source: zain-call)
- Amex approved an internal ChatGPT version only at the end of 2023 (source: carlos-call)
- Capital One uses Gemini for Google Workspace; the analytics team uses it for scripts and reports — but no full agent deployments, due to InfoSec restrictions on Snowflake access (source: jie-2026-05-16)
## Leaders vs. Laggards
| Company | Status |
|---|---|
| Revolut | Proprietary foundation model (PRAGMA) in production |
| Nubank | Proprietary foundation model (nuFormer) in production |
| Mastercard | Foundation model for cyber risk (LTM) |
| Stripe | AI across all functions; non-usage is flagged; all interviews include AI competency assessment |
| Capital One | All-in on AI training; Gemini + Claude Code; InfoSec limits on data access |
| Amex | Internal ChatGPT approved end of 2023 |
| Abbott | Copilot-level only |
## Why the Gap Exists
### At neobanks / tech-first firms
- Modern tech stack, data accessible and clean
- Culture of experimentation; imperfection acceptable
- 99% accuracy not required (95% is fine)
- No committee-based decision making
### At traditional banks
- Months just to find and scrub training data
- Regulatory mindset: formal change management, not experimentation
- Risk aversion: 99%+ accuracy required
- Workforce demographics: older average age, slower adoption
- Management not pushing top-down (source: carlos-call)
- Decisioning teams face additional constraints: fair-lending regulations require model interpretability, and deep learning is already used for credit and fraud but remains under scrutiny (source: jie-2026-05-16, shrikant-call)
## Enterprise AI Procurement Blockers
- Compliance guardrails and sandbox environments required before deployment
- Role permissions and token-spend monitoring must be in place
- Workforce AI training needed (e.g., one pension-fund MD requesting a $1M training budget)
- Slow procurement and budget approval cycles
(source: zain-call)
## Where the Real AI Value Is
Per Shrikant (source: shrikant-call):
- LLMs give average advice — not yet personalized insight
- Text-to-voice is a red herring; the real value lies in creating insight from data
- AI on numerical data is underexplored: near-zero storage/compute costs, high precision (to the 6th decimal place in risk models), and large volumes of historical data currently being discarded
Per Carlos (source: carlos-call):
- Revolut’s PRAGMA reports a 130% credit-scoring uplift — the first credible published evidence of foundation-model benefit at scale
- McKinsey clients found core LLMs outperformed Harvey in side-by-side tests
## The Execution Gap
Foundation models for finance are no longer a research problem. Per Simon Taylor:

> The base models are open-weight. The frameworks are public. The papers are on arXiv. The compute is rentable.
Banks have the talent and the resources. The gap is execution culture: getting data and risk talent working hands-on with ML infrastructure instead of being McKonsultant’d to death.