
How to roll out AI in a large company without losing customers (fintech reality check)

February 6, 2026 | Articles

The fastest way to destroy trust is to turn support into an automation maze. In fintech, clients tolerate self-serve until something scary happens (blocked funds, fraud, chargebacks, KYC). Then they need decisive human ownership. Revolut routes support mainly through in-app chat (with calling typically tied to paid tiers), and Payoneer’s public reviews often describe “robotic” replies and difficulty reaching a real person. The Guardian reported a case where a Revolut customer lost £40,000 to a scam, struggled to get a refund, and escalated to the UK’s Financial Ombudsman Service – exactly where weak escalation becomes reputational risk.

Here’s a rollout blueprint that protects revenue while you automate:

  1. Human-guaranteed escalation (non-negotiable)
    Define a visible “talk to a human” path for high-risk issues, with SLAs and one-tap handoff. No loops. Don’t hide humans behind paywalls for critical incidents.
  2. Segment by risk, not by cost
    Automate low-risk intents (status, FAQs, onboarding). For money/identity/legal disputes, keep humans in front, and let AI assist behind the scenes.
  3. AI as agent-assist before agent-replacement
    Use AI to summarize context, draft replies, surface policy, and suggest next actions – while the agent stays accountable.
  4. Design for trust: transparency + control
    Label AI responses, offer “request a callback,” and give customers a case ID + status timeline.
  5. Prove quality before expanding scope
    Start in “shadow mode” (AI suggests; humans send), then unlock partial automation only where accuracy + CSAT are stable. Keep an “AI off-ramp” for edge cases.
  6. Operate AI like a regulated product
    Weekly audits: resolution accuracy, repeat-contact rate, complaints, CSAT, and churn risk by intent. Track top failure modes (hallucinated policy, missing context, wrong escalation) and fix them like bugs. Minimum dashboard: time-to-human, containment rate, reopen/repeat rate, complaint rate, churn by cohort, and CSAT delta vs human agents.
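The routing logic in steps 1, 2, and 5 can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a production system: the intent names, the 95% accuracy threshold, and the SLA values are all hypothetical placeholders you would replace with your own taxonomy and targets.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"    # status checks, FAQs, onboarding
    HIGH = "high"  # money, identity, legal disputes

# Assumed intent-to-risk mapping (illustrative, not exhaustive).
INTENT_RISK = {
    "order_status": Risk.LOW,
    "faq": Risk.LOW,
    "onboarding": Risk.LOW,
    "blocked_funds": Risk.HIGH,
    "fraud_report": Risk.HIGH,
    "chargeback": Risk.HIGH,
    "kyc_issue": Risk.HIGH,
}

@dataclass
class Route:
    handler: str       # "ai" or "human"
    ai_assist: bool    # AI summarizes/drafts behind the scenes
    sla_minutes: int   # time-to-human guarantee
    shadow_mode: bool  # AI suggests, human sends

def route_ticket(intent: str, ai_accuracy: float, user_requested_human: bool) -> Route:
    """Route a ticket per the blueprint: automate low-risk volume,
    guarantee human ownership for exceptions."""
    # Unknown intents default to HIGH risk: fail toward humans, not away.
    risk = INTENT_RISK.get(intent, Risk.HIGH)

    # Step 1: a visible "talk to a human" path is non-negotiable,
    # and high-risk issues always keep a human in front (step 2).
    if user_requested_human or risk is Risk.HIGH:
        return Route(handler="human", ai_assist=True, sla_minutes=15, shadow_mode=False)

    # Step 5: unlock automation only where accuracy has proven stable
    # (threshold is an assumption); otherwise stay in shadow mode.
    if ai_accuracy < 0.95:
        return Route(handler="human", ai_assist=True, sla_minutes=60, shadow_mode=True)

    return Route(handler="ai", ai_assist=False, sla_minutes=60, shadow_mode=False)
```

Note the design choice: the mapping defaults unrecognized intents to HIGH risk, so a misclassified fraud report degrades into a human conversation rather than an automation loop – the exact failure mode the blueprint is built to avoid.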

What “good” looks like (real numbers):

  • Klarna: ~2/3 of service chats handled by AI; resolution time cut from ~11 minutes to under 2 minutes; ~25% fewer repeat inquiries; CSAT on par with humans.
  • Jaja Finance: response times reduced by ~90%, down to about 15 seconds.
  • Bank of America: Erica surpassed 2B+ interactions (and later 3B) as a scalable self-service layer for everyday tasks.

Rule of thumb: automate volume, not responsibility. Aim to deflect routine tickets, but guarantee human ownership for exceptions – and make that promise visible in the UI.

Bottom line: AI should remove friction, not responsibility. If customers can’t reach a trained human when the stakes rise, AI becomes a churn engine – even if your cost per ticket looks great.


About the Author

Meet Alena Shurtakova, who designs operating models for the AI era

Alena Shurtakova designs operating models for the AI era – where technology, people, and decision-making collide. Her work focuses on how organizations grow, scale, and change by redesigning the systems that govern execution, accountability, and learning.

Alena’s blueprints address key questions, such as who owns AI-involved decisions, what to automate versus keep human-led, how to reorganize workflows without friction, what to ship in order to avoid installing unused AI tools, and how to scale changes across thousands of employees.